Test Report: Docker_Linux_crio 21764

d8ceda1a406080ee928dec4912f2c0ffeefd6083:2025-10-18:41957

Failed tests (37/327)

Order | Failed test | Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 14.96
36 TestAddons/parallel/RegistryCreds 0.39
37 TestAddons/parallel/Ingress 146.82
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 37.7
42 TestAddons/parallel/Headlamp 2.51
43 TestAddons/parallel/CloudSpanner 5.24
44 TestAddons/parallel/LocalPath 10.08
45 TestAddons/parallel/NvidiaDevicePlugin 5.25
46 TestAddons/parallel/Yakd 6.23
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
98 TestFunctional/parallel/ServiceCmdConnect 602.83
115 TestFunctional/parallel/ServiceCmd/DeployApp 600.62
124 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.02
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.77
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.85
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.29
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
154 TestFunctional/parallel/ServiceCmd/Format 0.52
155 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 2.39
197 TestJSONOutput/unpause/Command 2.07
292 TestPause/serial/Pause 6.62
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.1
301 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.08
310 TestStartStop/group/old-k8s-version/serial/Pause 6.22
316 TestStartStop/group/no-preload/serial/Pause 6.21
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.44
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.11
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.75
337 TestStartStop/group/newest-cni/serial/Pause 6.14
344 TestStartStop/group/embed-certs/serial/Pause 6.46
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.1
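
Note: every addon-disable failure reproduced below carries the same signature: the command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube's paused-cluster check shells out to "sudo runc list -f json", which fails on this crio node with "open /run/runc: no such file or directory". A minimal repro sketch, assuming the addons-222746 profile from this run is still up (both commands are taken from the logs below):

	# listing kube-system containers via crictl succeeds (the cri.go:54 code path)
	out/minikube-linux-amd64 -p addons-222746 ssh 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'
	# the follow-up runc query is the step that fails: /run/runc does not exist under crio here
	out/minikube-linux-amd64 -p addons-222746 ssh 'sudo runc list -f json'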
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable volcano --alsologtostderr -v=1: exit status 11 (234.847527ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:09.955874  144297 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:09.956187  144297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:09.956198  144297 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:09.956202  144297 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:09.956403  144297 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:09.956644  144297 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:09.957010  144297 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:09.957027  144297 addons.go:606] checking whether the cluster is paused
	I1018 09:01:09.957109  144297 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:09.957121  144297 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:09.957507  144297 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:09.976054  144297 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:09.976110  144297 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:09.993045  144297 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:10.087519  144297 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:10.087624  144297 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:10.115407  144297 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:10.115431  144297 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:10.115437  144297 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:10.115442  144297 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:10.115447  144297 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:10.115451  144297 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:10.115456  144297 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:10.115460  144297 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:10.115464  144297 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:10.115473  144297 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:10.115481  144297 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:10.115485  144297 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:10.115492  144297 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:10.115496  144297 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:10.115519  144297 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:10.115538  144297 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:10.115549  144297 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:10.115555  144297 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:10.115558  144297 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:10.115562  144297 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:10.115570  144297 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:10.115574  144297 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:10.115581  144297 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:10.115586  144297 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:10.115593  144297 cri.go:89] found id: ""
	I1018 09:01:10.115637  144297 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:10.129480  144297 out.go:203] 
	W1018 09:01:10.130713  144297 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:10.130733  144297 out.go:285] * 
	* 
	W1018 09:01:10.133756  144297 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:10.134844  144297 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
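
The failure above is in cleanup, not in Volcano itself: the test body was skipped at addons_test.go:850 as unsupported on crio, and the trailing "addons disable volcano" then hit the runc error. One hedged way to confirm the runtime-state mismatch on the node; the /run/crun path is an assumption (crun's default state directory when crio is configured with crun), not something this log shows:

	# expect /run/runc to be missing; an existing /run/crun would point at crun as the active OCI runtime
	out/minikube-linux-amd64 -p addons-222746 ssh 'ls -d /run/runc /run/crun'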

TestAddons/parallel/Registry (14.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.977957ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002758329s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003079258s
addons_test.go:392: (dbg) Run:  kubectl --context addons-222746 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-222746 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-222746 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.530234996s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 ip
2025/10/18 09:01:33 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable registry --alsologtostderr -v=1: exit status 11 (231.79468ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:33.712227  146439 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:33.712488  146439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:33.712497  146439 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:33.712501  146439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:33.712717  146439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:33.712972  146439 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:33.713311  146439 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:33.713326  146439 addons.go:606] checking whether the cluster is paused
	I1018 09:01:33.713402  146439 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:33.713413  146439 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:33.713754  146439 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:33.732481  146439 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:33.732546  146439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:33.750087  146439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:33.843424  146439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:33.843535  146439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:33.876166  146439 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:33.876200  146439 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:33.876204  146439 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:33.876206  146439 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:33.876209  146439 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:33.876214  146439 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:33.876217  146439 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:33.876219  146439 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:33.876221  146439 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:33.876231  146439 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:33.876234  146439 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:33.876236  146439 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:33.876239  146439 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:33.876241  146439 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:33.876243  146439 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:33.876255  146439 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:33.876263  146439 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:33.876267  146439 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:33.876269  146439 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:33.876271  146439 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:33.876274  146439 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:33.876277  146439 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:33.876279  146439 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:33.876281  146439 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:33.876283  146439 cri.go:89] found id: ""
	I1018 09:01:33.876334  146439 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:33.890565  146439 out.go:203] 
	W1018 09:01:33.891793  146439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:33.891810  146439 out.go:285] * 
	* 
	W1018 09:01:33.895011  146439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:33.896123  146439 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.96s)

TestAddons/parallel/RegistryCreds (0.39s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.359817ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-222746
addons_test.go:332: (dbg) Run:  kubectl --context addons-222746 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (233.237823ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:35.883837  146977 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:35.884137  146977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.884148  146977 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:35.884152  146977 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.884422  146977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:35.884727  146977 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:35.885132  146977 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.885150  146977 addons.go:606] checking whether the cluster is paused
	I1018 09:01:35.885244  146977 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.885259  146977 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:35.885642  146977 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:35.902939  146977 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:35.902990  146977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:35.922314  146977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:36.017416  146977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:36.017517  146977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:36.046280  146977 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:36.046304  146977 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:36.046310  146977 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:36.046322  146977 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:36.046327  146977 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:36.046332  146977 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:36.046335  146977 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:36.046339  146977 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:36.046343  146977 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:36.046351  146977 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:36.046359  146977 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:36.046363  146977 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:36.046371  146977 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:36.046376  146977 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:36.046383  146977 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:36.046404  146977 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:36.046413  146977 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:36.046418  146977 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:36.046422  146977 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:36.046426  146977 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:36.046431  146977 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:36.046438  146977 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:36.046442  146977 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:36.046446  146977 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:36.046449  146977 cri.go:89] found id: ""
	I1018 09:01:36.046493  146977 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:36.060862  146977 out.go:203] 
	W1018 09:01:36.062216  146977 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:36.062233  146977 out.go:285] * 
	* 
	W1018 09:01:36.065228  146977 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:36.066428  146977 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.39s)

TestAddons/parallel/Ingress (146.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-222746 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-222746 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-222746 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9970e430-c690-446f-9bf5-6992c212595a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9970e430-c690-446f-9bf5-6992c212595a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003228381s
I1018 09:01:44.697431  134611 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.294609902s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
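Exit status 28 propagated through ssh matches curl's CURLE_OPERATION_TIMEDOUT, which would mean the ingress controller never answered on 127.0.0.1:80 rather than serving an error page. A quick manual probe, a sketch assuming the profile is still running (--max-time is added here so the check fails fast instead of hanging for the test's 2m14s):

	out/minikube-linux-amd64 -p addons-222746 ssh "curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-222746 -n ingress-nginx get pods -o wide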
addons_test.go:288: (dbg) Run:  kubectl --context addons-222746 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-222746
helpers_test.go:243: (dbg) docker inspect addons-222746:

-- stdout --
	[
	    {
	        "Id": "08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60",
	        "Created": "2025-10-18T08:58:48.818383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:58:48.849405244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/hostname",
	        "HostsPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/hosts",
	        "LogPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60-json.log",
	        "Name": "/addons-222746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-222746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-222746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60",
	                "LowerDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-222746",
	                "Source": "/var/lib/docker/volumes/addons-222746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-222746",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-222746",
	                "name.minikube.sigs.k8s.io": "addons-222746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bfae7c41848df1c2c55af9b1f1cbdbb399d978b3c7814464398ef7c96367b7e",
	            "SandboxKey": "/var/run/docker/netns/4bfae7c41848",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-222746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:31:b6:68:ff:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c138596d16bfd741a46ad14146c73cfc29e5eb10215236c22d54328825d7e82",
	                    "EndpointID": "f6b238c4c4ea6538597beea4d28b78b001604561f841495c56044574c6452680",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-222746",
	                        "08bddbb0d829"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
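The 22/tcp mapping above (127.0.0.1:32888) is the same SSH endpoint the failing addon commands resolved earlier via cli_runner; it can be read back with the template query the logs already use:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-222746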
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-222746 -n addons-222746
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-222746 logs -n 25: (1.182879281s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-818527 --alsologtostderr --binary-mirror http://127.0.0.1:41249 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-818527 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ -p binary-mirror-818527                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-818527 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ addons  │ disable dashboard -p addons-222746                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ addons  │ enable dashboard -p addons-222746                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ start   │ -p addons-222746 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 09:01 UTC │
	│ addons  │ addons-222746 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ enable headlamp -p addons-222746 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ ssh     │ addons-222746 ssh cat /opt/local-path-provisioner/pvc-ac0e00a7-7476-4965-b255-10439b12d9d4_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │ 18 Oct 25 09:01 UTC │
	│ addons  │ addons-222746 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ ip      │ addons-222746 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │ 18 Oct 25 09:01 UTC │
	│ addons  │ addons-222746 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-222746                                                                                                                                                                                                                                                                                                                                                                                           │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │ 18 Oct 25 09:01 UTC │
	│ addons  │ addons-222746 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ ssh     │ addons-222746 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:02 UTC │                     │
	│ addons  │ addons-222746 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:02 UTC │                     │
	│ ip      │ addons-222746 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-222746        │ jenkins │ v1.37.0 │ 18 Oct 25 09:03 UTC │ 18 Oct 25 09:03 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:58:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
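
The two header lines above describe klog-style records. A minimal Go sketch for splitting such a record, assuming exactly the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout stated above (the regexp and sample line are illustrations, not minikube code):

package main

import (
	"fmt"
	"regexp"
)

// Matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg,
// i.e. the "Log line format" declared in the header above.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

func main() {
	line := "I1018 08:58:25.353444  135984 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
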
	I1018 08:58:25.353444  135984 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:58:25.353561  135984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:25.353567  135984 out.go:374] Setting ErrFile to fd 2...
	I1018 08:58:25.353576  135984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:25.353815  135984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 08:58:25.354504  135984 out.go:368] Setting JSON to false
	I1018 08:58:25.355410  135984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2449,"bootTime":1760775456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:58:25.355502  135984 start.go:141] virtualization: kvm guest
	I1018 08:58:25.357290  135984 out.go:179] * [addons-222746] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:58:25.358525  135984 notify.go:220] Checking for updates...
	I1018 08:58:25.358546  135984 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 08:58:25.359687  135984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:58:25.360794  135984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:58:25.361941  135984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 08:58:25.362929  135984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:58:25.363919  135984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:58:25.365033  135984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:58:25.387253  135984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:58:25.387328  135984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:25.445043  135984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:25.435927189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:25.445196  135984 docker.go:318] overlay module found
	I1018 08:58:25.447274  135984 out.go:179] * Using the docker driver based on user configuration
	I1018 08:58:25.448505  135984 start.go:305] selected driver: docker
	I1018 08:58:25.448518  135984 start.go:925] validating driver "docker" against <nil>
	I1018 08:58:25.448529  135984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:58:25.449150  135984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:25.502458  135984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:25.493371851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:25.502664  135984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:58:25.502909  135984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:58:25.504391  135984 out.go:179] * Using Docker driver with root privileges
	I1018 08:58:25.505357  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:58:25.505422  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:25.505436  135984 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:58:25.505509  135984 start.go:349] cluster config:
	{Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:58:25.506683  135984 out.go:179] * Starting "addons-222746" primary control-plane node in "addons-222746" cluster
	I1018 08:58:25.507697  135984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:58:25.508704  135984 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:58:25.509619  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:25.509661  135984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:58:25.509657  135984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:58:25.509675  135984 cache.go:58] Caching tarball of preloaded images
	I1018 08:58:25.509759  135984 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:58:25.509772  135984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:58:25.510095  135984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json ...
	I1018 08:58:25.510125  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json: {Name:mkdc42a5bc207c1cc977281fa28ebcc7d4fa6a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:25.526787  135984 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:58:25.526941  135984 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:58:25.526957  135984 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:58:25.526961  135984 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:58:25.526969  135984 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:58:25.526977  135984 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:58:38.951436  135984 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:58:38.951476  135984 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:58:38.951516  135984 start.go:360] acquireMachinesLock for addons-222746: {Name:mk3d9c09b09d63a7cc3970bf61c61e1409029565 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:58:38.951643  135984 start.go:364] duration metric: took 89.833µs to acquireMachinesLock for "addons-222746"
	I1018 08:58:38.951690  135984 start.go:93] Provisioning new machine with config: &{Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:58:38.951776  135984 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:58:38.953450  135984 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:58:38.953663  135984 start.go:159] libmachine.API.Create for "addons-222746" (driver="docker")
	I1018 08:58:38.953697  135984 client.go:168] LocalClient.Create starting
	I1018 08:58:38.953799  135984 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 08:58:39.062984  135984 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 08:58:39.678275  135984 cli_runner.go:164] Run: docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:58:39.694746  135984 cli_runner.go:211] docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:58:39.694844  135984 network_create.go:284] running [docker network inspect addons-222746] to gather additional debugging logs...
	I1018 08:58:39.694872  135984 cli_runner.go:164] Run: docker network inspect addons-222746
	W1018 08:58:39.711305  135984 cli_runner.go:211] docker network inspect addons-222746 returned with exit code 1
	I1018 08:58:39.711337  135984 network_create.go:287] error running [docker network inspect addons-222746]: docker network inspect addons-222746: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-222746 not found
	I1018 08:58:39.711374  135984 network_create.go:289] output of [docker network inspect addons-222746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-222746 not found
	
	** /stderr **
	I1018 08:58:39.711494  135984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:58:39.728479  135984 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca7c80}
	I1018 08:58:39.728523  135984 network_create.go:124] attempt to create docker network addons-222746 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:58:39.728575  135984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-222746 addons-222746
	I1018 08:58:39.783590  135984 network_create.go:108] docker network addons-222746 192.168.49.0/24 created
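
Here minikube probed existing Docker networks and settled on 192.168.49.0/24 as free. A rough Go sketch of that idea, assuming candidate /24s stepped from 192.168.49.0 (the step size of 9 and the single "bridge" inspect are assumptions for brevity; minikube's real network.go scans every network):

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
)

// dockerSubnets lists the IPAM subnets of the named Docker networks.
// A real implementation would first enumerate names via `docker network ls`.
func dockerSubnets(names ...string) ([]*net.IPNet, error) {
	var nets []*net.IPNet
	for _, name := range names {
		out, err := exec.Command("docker", "network", "inspect", "--format",
			"{{range .IPAM.Config}}{{.Subnet}} {{end}}", name).Output()
		if err != nil {
			return nil, err
		}
		for _, s := range strings.Fields(string(out)) {
			if _, n, err := net.ParseCIDR(s); err == nil {
				nets = append(nets, n)
			}
		}
	}
	return nets, nil
}

func main() {
	taken, err := dockerSubnets("bridge") // add other networks as needed
	if err != nil {
		panic(err)
	}
	for third := 49; third < 255; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		_, cand, _ := net.ParseCIDR(cidr)
		free := true
		for _, t := range taken {
			if t.Contains(cand.IP) || cand.Contains(t.IP) {
				free = false
				break
			}
		}
		if free {
			fmt.Println("using free private subnet", cidr)
			return
		}
	}
	fmt.Println("no free /24 found")
}
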
	I1018 08:58:39.783622  135984 kic.go:121] calculated static IP "192.168.49.2" for the "addons-222746" container
	I1018 08:58:39.783696  135984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:58:39.799357  135984 cli_runner.go:164] Run: docker volume create addons-222746 --label name.minikube.sigs.k8s.io=addons-222746 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:58:39.816949  135984 oci.go:103] Successfully created a docker volume addons-222746
	I1018 08:58:39.817051  135984 cli_runner.go:164] Run: docker run --rm --name addons-222746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --entrypoint /usr/bin/test -v addons-222746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:58:44.424437  135984 cli_runner.go:217] Completed: docker run --rm --name addons-222746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --entrypoint /usr/bin/test -v addons-222746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (4.607327013s)
	I1018 08:58:44.424465  135984 oci.go:107] Successfully prepared a docker volume addons-222746
	I1018 08:58:44.424505  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:44.424528  135984 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:58:44.424574  135984 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-222746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:58:48.748231  135984 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-222746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.323622023s)
	I1018 08:58:48.748283  135984 kic.go:203] duration metric: took 4.323743083s to extract preloaded images to volume ...
	W1018 08:58:48.748387  135984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 08:58:48.748421  135984 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 08:58:48.748469  135984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:58:48.803301  135984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-222746 --name addons-222746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-222746 --network addons-222746 --ip 192.168.49.2 --volume addons-222746:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:58:49.059658  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Running}}
	I1018 08:58:49.079452  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.098941  135984 cli_runner.go:164] Run: docker exec addons-222746 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:58:49.142909  135984 oci.go:144] the created container "addons-222746" has a running status.
	I1018 08:58:49.142946  135984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa...
	I1018 08:58:49.328458  135984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:58:49.363105  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.381675  135984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:58:49.381695  135984 kic_runner.go:114] Args: [docker exec --privileged addons-222746 chown docker:docker /home/docker/.ssh/authorized_keys]
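
The id_rsa.pub --> /home/docker/.ssh/authorized_keys step above is just an OpenSSH public-key line being installed into the node container. A self-contained Go sketch of producing such a key pair, assuming RSA-2048 and the golang.org/x/crypto/ssh helpers (the output file names are illustrative):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Generate an RSA key pair comparable to the per-machine id_rsa above.
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half as PEM, like .minikube/machines/<name>/id_rsa.
	pemBytes := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", pemBytes, 0600); err != nil {
		panic(err)
	}
	// Public half in authorized_keys format ("ssh-rsa AAAA...").
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
		panic(err)
	}
}
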
	I1018 08:58:49.432302  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.451553  135984 machine.go:93] provisionDockerMachine start ...
	I1018 08:58:49.451669  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.470094  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.470312  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.470322  135984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:58:49.601440  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-222746
	
	I1018 08:58:49.601469  135984 ubuntu.go:182] provisioning hostname "addons-222746"
	I1018 08:58:49.601531  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.619084  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.619380  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.619407  135984 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-222746 && echo "addons-222746" | sudo tee /etc/hostname
	I1018 08:58:49.763196  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-222746
	
	I1018 08:58:49.763263  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.779905  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.780109  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.780126  135984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-222746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-222746/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-222746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:58:49.910206  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:58:49.910240  135984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 08:58:49.910284  135984 ubuntu.go:190] setting up certificates
	I1018 08:58:49.910302  135984 provision.go:84] configureAuth start
	I1018 08:58:49.910359  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:49.927220  135984 provision.go:143] copyHostCerts
	I1018 08:58:49.927287  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 08:58:49.927393  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 08:58:49.927453  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 08:58:49.927507  135984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.addons-222746 san=[127.0.0.1 192.168.49.2 addons-222746 localhost minikube]
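
The server cert requested above carries the SANs [127.0.0.1 192.168.49.2 addons-222746 localhost minikube]. A hedged Go sketch of minting a cert with that SAN set via crypto/x509; it self-signs for brevity, whereas the log shows minikube signing against its own CA (ca.pem / ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-222746"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list reported in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-222746", "localhost", "minikube"},
	}
	// Self-signed here; minikube uses its CA cert/key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
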
	I1018 08:58:50.214928  135984 provision.go:177] copyRemoteCerts
	I1018 08:58:50.214984  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:58:50.215017  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.231781  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.326582  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:58:50.344960  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:58:50.361719  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:58:50.377772  135984 provision.go:87] duration metric: took 467.450843ms to configureAuth
	I1018 08:58:50.377803  135984 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:58:50.378055  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:58:50.378150  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.395211  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:50.395459  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:50.395480  135984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:58:50.631215  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:58:50.631239  135984 machine.go:96] duration metric: took 1.179652002s to provisionDockerMachine
	I1018 08:58:50.631250  135984 client.go:171] duration metric: took 11.677542597s to LocalClient.Create
	I1018 08:58:50.631268  135984 start.go:167] duration metric: took 11.677605196s to libmachine.API.Create "addons-222746"
	I1018 08:58:50.631279  135984 start.go:293] postStartSetup for "addons-222746" (driver="docker")
	I1018 08:58:50.631292  135984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:58:50.631345  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:58:50.631389  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.648401  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.746184  135984 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:58:50.750239  135984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:58:50.750271  135984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:58:50.750286  135984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 08:58:50.750351  135984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 08:58:50.750389  135984 start.go:296] duration metric: took 119.099305ms for postStartSetup
	I1018 08:58:50.750712  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:50.768102  135984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json ...
	I1018 08:58:50.768376  135984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:58:50.768422  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.786497  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.878752  135984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:58:50.883103  135984 start.go:128] duration metric: took 11.931304054s to createHost
	I1018 08:58:50.883125  135984 start.go:83] releasing machines lock for "addons-222746", held for 11.931468631s
	I1018 08:58:50.883183  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:50.899763  135984 ssh_runner.go:195] Run: cat /version.json
	I1018 08:58:50.899802  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.899866  135984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:58:50.899933  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.917042  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.917351  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:51.059990  135984 ssh_runner.go:195] Run: systemctl --version
	I1018 08:58:51.066088  135984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:58:51.098934  135984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:58:51.103815  135984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:58:51.103927  135984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:58:51.128054  135984 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 08:58:51.128073  135984 start.go:495] detecting cgroup driver to use...
	I1018 08:58:51.128102  135984 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 08:58:51.128139  135984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:58:51.143371  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:58:51.154976  135984 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:58:51.155022  135984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:58:51.170092  135984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:58:51.186368  135984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:58:51.266725  135984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:58:51.351600  135984 docker.go:234] disabling docker service ...
	I1018 08:58:51.351668  135984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:58:51.368757  135984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:58:51.380906  135984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:58:51.462156  135984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:58:51.542997  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:58:51.554883  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:58:51.569793  135984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:58:51.569861  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.579649  135984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 08:58:51.579719  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.587997  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.596004  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.604257  135984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:58:51.611792  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.619819  135984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.632606  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.641050  135984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:58:51.648183  135984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:58:51.655289  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:58:51.731410  135984 ssh_runner.go:195] Run: sudo systemctl restart crio
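
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly these keys before crio is restarted (a sketch; the section headers and surrounding keys depend on the base image's stock drop-in):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
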
	I1018 08:58:51.826783  135984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:58:51.826888  135984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:58:51.830817  135984 start.go:563] Will wait 60s for crictl version
	I1018 08:58:51.830903  135984 ssh_runner.go:195] Run: which crictl
	I1018 08:58:51.834504  135984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:58:51.858148  135984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:58:51.858252  135984 ssh_runner.go:195] Run: crio --version
	I1018 08:58:51.884663  135984 ssh_runner.go:195] Run: crio --version
	I1018 08:58:51.913103  135984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:58:51.914317  135984 cli_runner.go:164] Run: docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:58:51.930211  135984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:58:51.934381  135984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:58:51.944535  135984 kubeadm.go:883] updating cluster {Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:58:51.944678  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:51.944742  135984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:58:51.974625  135984 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:58:51.974648  135984 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:58:51.974712  135984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:58:51.998148  135984 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:58:51.998170  135984 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:58:51.998180  135984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:58:51.998294  135984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-222746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 08:58:51.998354  135984 ssh_runner.go:195] Run: crio config
	I1018 08:58:52.039385  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:58:52.039415  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:52.039441  135984 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:58:52.039473  135984 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-222746 NodeName:addons-222746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:58:52.039644  135984 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-222746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 08:58:52.039715  135984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:58:52.047684  135984 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:58:52.047743  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:58:52.055100  135984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:58:52.067047  135984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:58:52.083221  135984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 08:58:52.096523  135984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:58:52.100309  135984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:58:52.110233  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:58:52.187299  135984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:58:52.209075  135984 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746 for IP: 192.168.49.2
	I1018 08:58:52.209098  135984 certs.go:195] generating shared ca certs ...
	I1018 08:58:52.209117  135984 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.209257  135984 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 08:58:52.421213  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt ...
	I1018 08:58:52.421249  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt: {Name:mk43cc1d9eca8b1ae9f5477a3ce778748878dcc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.421431  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key ...
	I1018 08:58:52.421443  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key: {Name:mkd4fd3ac3b76e1f6e249c88a55986a8ea0c2f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.421520  135984 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 08:58:52.805703  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt ...
	I1018 08:58:52.805734  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt: {Name:mke5e30a1bcc1bc16d4358d42c0f6b1df1c8176b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.805905  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key ...
	I1018 08:58:52.805917  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key: {Name:mk399fd0ff439f73c972d782761d754ce8457311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.805987  135984 certs.go:257] generating profile certs ...
	I1018 08:58:52.806040  135984 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key
	I1018 08:58:52.806054  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt with IP's: []
	I1018 08:58:53.017845  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt ...
	I1018 08:58:53.017882  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: {Name:mke03f832dafda02bdf462f2edad012119921b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.018044  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key ...
	I1018 08:58:53.018055  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key: {Name:mke1c144541163258131f24fc2889eb68ee0c5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.018126  135984 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929
	I1018 08:58:53.018145  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:58:53.142977  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 ...
	I1018 08:58:53.143007  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929: {Name:mk33a94e3eb4a900d2b65a5fcedd873cda70dd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.143169  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929 ...
	I1018 08:58:53.143182  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929: {Name:mkd2592ee160e138c2aee5869cbdabef8281355c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.143252  135984 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt
	I1018 08:58:53.143349  135984 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key
	I1018 08:58:53.143407  135984 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key
	I1018 08:58:53.143426  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt with IP's: []
	I1018 08:58:53.376923  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt ...
	I1018 08:58:53.376953  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt: {Name:mkc84b0ac1d726976d83f916213be09e6d6be32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.377107  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key ...
	I1018 08:58:53.377122  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key: {Name:mk64bf062c49f697d92d9d5d0e45f5a0f46edf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.377296  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:58:53.377331  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:58:53.377354  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:58:53.377392  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 08:58:53.378007  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:58:53.395691  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 08:58:53.412421  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:58:53.429255  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:58:53.445567  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:58:53.462487  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 08:58:53.478999  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:58:53.495026  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 08:58:53.511687  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:58:53.529895  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 08:58:53.541478  135984 ssh_runner.go:195] Run: openssl version
	I1018 08:58:53.547220  135984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:58:53.557377  135984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.561107  135984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.561159  135984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.595948  135984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
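The three steps above (copy the CA into /usr/share/ca-certificates, symlink it into /etc/ssl/certs, then add a b5213941.0 symlink) install minikube's CA into the node trust store; the .0 filename is the certificate's OpenSSL subject hash, which is exactly what the openssl x509 -hash call computes. A sketch, assuming a CA file named ca.pem:

    # Prints the 8-hex-digit subject hash used as the <hash>.0 name under /etc/ssl/certs.
    openssl x509 -hash -noout -in ca.pem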
	I1018 08:58:53.604849  135984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:58:53.608487  135984 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:58:53.608534  135984 kubeadm.go:400] StartCluster: {Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:58:53.608594  135984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:58:53.608652  135984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:58:53.633996  135984 cri.go:89] found id: ""
	I1018 08:58:53.634086  135984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:58:53.642438  135984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:58:53.650776  135984 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:58:53.650852  135984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:58:53.658932  135984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:58:53.658958  135984 kubeadm.go:157] found existing configuration files:
	
	I1018 08:58:53.659009  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:58:53.667800  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:58:53.667877  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:58:53.675669  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:58:53.683001  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:58:53.683062  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:58:53.690186  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:58:53.697379  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:58:53.697426  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:58:53.704535  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:58:53.711853  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:58:53.711913  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:58:53.719045  135984 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
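The kubeadm init above carries a long --ignore-preflight-errors list because checks such as Swap, Mem, and SystemVerification are not meaningful inside a Docker container. To see what preflight alone would report, the phase can be run in isolation; a sketch reusing the config path from this log:

    # Run only kubeadm's preflight checks against the generated config.
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml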
	I1018 08:58:53.752321  135984 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:58:53.752416  135984 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:58:53.771967  135984 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:58:53.772044  135984 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 08:58:53.772090  135984 kubeadm.go:318] OS: Linux
	I1018 08:58:53.772154  135984 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:58:53.772224  135984 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:58:53.772303  135984 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:58:53.772373  135984 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:58:53.772448  135984 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:58:53.772516  135984 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:58:53.772598  135984 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:58:53.772672  135984 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 08:58:53.825884  135984 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:58:53.826016  135984 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:58:53.826131  135984 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
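As the preflight hint above says, the control-plane images can be pre-pulled so init does not stall on the network; a sketch pinned to the version used in this run:

    # Pre-pull the control-plane images for the exact Kubernetes version under test.
    kubeadm config images pull --kubernetes-version v1.34.1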
	I1018 08:58:53.832977  135984 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:58:53.834856  135984 out.go:252]   - Generating certificates and keys ...
	I1018 08:58:53.834953  135984 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:58:53.835052  135984 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:58:54.114772  135984 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:58:54.397739  135984 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:58:54.473360  135984 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:58:54.799336  135984 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:58:55.021604  135984 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:58:55.021794  135984 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-222746 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:58:55.080169  135984 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:58:55.080381  135984 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-222746 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:58:55.674976  135984 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:58:55.844281  135984 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:58:56.026064  135984 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:58:56.026130  135984 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:58:56.285221  135984 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:58:56.588454  135984 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:58:56.990256  135984 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:58:57.517914  135984 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:58:57.664020  135984 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:58:57.664391  135984 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:58:57.667805  135984 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:58:57.669278  135984 out.go:252]   - Booting up control plane ...
	I1018 08:58:57.669402  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:58:57.669518  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:58:57.670027  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:58:57.684022  135984 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:58:57.684155  135984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:58:57.690547  135984 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:58:57.690792  135984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:58:57.690869  135984 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:58:57.783910  135984 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:58:57.784101  135984 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:58:58.785656  135984 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001972199s
	I1018 08:58:58.788468  135984 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:58:58.788594  135984 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:58:58.788735  135984 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:58:58.788902  135984 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:59:00.619301  135984 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.830769446s
	I1018 08:59:01.021076  135984 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.232454951s
	I1018 08:59:02.290444  135984 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501905376s
	I1018 08:59:02.300150  135984 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:59:02.309036  135984 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:59:02.316583  135984 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:59:02.316914  135984 kubeadm.go:318] [mark-control-plane] Marking the node addons-222746 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:59:02.324662  135984 kubeadm.go:318] [bootstrap-token] Using token: ysi78m.ifkobpqrcrut0qeu
	I1018 08:59:02.326067  135984 out.go:252]   - Configuring RBAC rules ...
	I1018 08:59:02.326221  135984 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:59:02.328913  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:59:02.333394  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:59:02.336153  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:59:02.338141  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:59:02.340196  135984 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:59:02.696432  135984 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:59:03.108635  135984 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:59:03.696061  135984 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:59:03.696868  135984 kubeadm.go:318] 
	I1018 08:59:03.696995  135984 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:59:03.697015  135984 kubeadm.go:318] 
	I1018 08:59:03.697133  135984 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:59:03.697143  135984 kubeadm.go:318] 
	I1018 08:59:03.697186  135984 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:59:03.697276  135984 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:59:03.697360  135984 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:59:03.697370  135984 kubeadm.go:318] 
	I1018 08:59:03.697446  135984 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:59:03.697472  135984 kubeadm.go:318] 
	I1018 08:59:03.697568  135984 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:59:03.697580  135984 kubeadm.go:318] 
	I1018 08:59:03.697673  135984 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:59:03.697757  135984 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:59:03.697835  135984 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:59:03.697847  135984 kubeadm.go:318] 
	I1018 08:59:03.697929  135984 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:59:03.697995  135984 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:59:03.698007  135984 kubeadm.go:318] 
	I1018 08:59:03.698075  135984 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ysi78m.ifkobpqrcrut0qeu \
	I1018 08:59:03.698195  135984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 08:59:03.698237  135984 kubeadm.go:318] 	--control-plane 
	I1018 08:59:03.698242  135984 kubeadm.go:318] 
	I1018 08:59:03.698348  135984 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:59:03.698360  135984 kubeadm.go:318] 
	I1018 08:59:03.698452  135984 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ysi78m.ifkobpqrcrut0qeu \
	I1018 08:59:03.698553  135984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
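The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the DER-encoded public key of the cluster CA. If the printed command is lost, the hash can be recomputed on the control plane; a sketch using the standard openssl pipeline (assumes kubeadm's default RSA CA key):

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'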
	I1018 08:59:03.700182  135984 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 08:59:03.700283  135984 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 08:59:03.700308  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:59:03.700318  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:59:03.702448  135984 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:59:03.703566  135984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:59:03.707818  135984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:59:03.707918  135984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:59:03.720632  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
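For the docker driver with the crio runtime, minikube picks the kindnet CNI and applies its manifest with the cluster's own kubectl, as above. A verification sketch; the DaemonSet name kindnet is an assumption based on the manifest minikube ships:

    # Wait for the CNI daemonset to roll out (name assumed from minikube's kindnet manifest).
    kubectl -n kube-system rollout status ds/kindnet --timeout=60s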
	I1018 08:59:03.909971  135984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:59:03.910043  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:03.910050  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-222746 minikube.k8s.io/updated_at=2025_10_18T08_59_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=addons-222746 minikube.k8s.io/primary=true
	I1018 08:59:03.919720  135984 ops.go:34] apiserver oom_adj: -16
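The oom_adj value of -16 read above is the legacy /proc interface's rendering of the strongly negative oom_score_adj the kubelet assigns to critical static pods, confirming the apiserver is shielded from the OOM killer. A sketch for reading the modern value directly:

    # oom_adj is the deprecated interface; the authoritative value is oom_score_adj.
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj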
	I1018 08:59:03.990591  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:04.491213  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:04.990936  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:05.491029  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:05.991003  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:06.491220  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:06.991286  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:07.490924  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:07.552618  135984 kubeadm.go:1113] duration metric: took 3.642633599s to wait for elevateKubeSystemPrivileges
	I1018 08:59:07.552662  135984 kubeadm.go:402] duration metric: took 13.944131015s to StartCluster
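The burst of kubectl get sa default calls between 08:59:03 and 08:59:07 is a poll: the default ServiceAccount is created asynchronously after init, and minikube waits for it before binding kube-system to cluster-admin (the elevateKubeSystemPrivileges step timed above). The same wait as a plain shell loop, a sketch assuming kubectl already points at the new cluster:

    # Poll until the serviceaccount controller has created the default SA.
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done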
	I1018 08:59:07.552697  135984 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:59:07.552813  135984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:59:07.553339  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:59:07.553574  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:59:07.553563  135984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:59:07.553587  135984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:59:07.553737  135984 addons.go:69] Setting yakd=true in profile "addons-222746"
	I1018 08:59:07.553740  135984 addons.go:69] Setting ingress=true in profile "addons-222746"
	I1018 08:59:07.553770  135984 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-222746"
	I1018 08:59:07.553788  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:59:07.553780  135984 addons.go:69] Setting metrics-server=true in profile "addons-222746"
	I1018 08:59:07.553793  135984 addons.go:238] Setting addon ingress=true in "addons-222746"
	I1018 08:59:07.553780  135984 addons.go:69] Setting ingress-dns=true in profile "addons-222746"
	I1018 08:59:07.553840  135984 addons.go:238] Setting addon metrics-server=true in "addons-222746"
	I1018 08:59:07.553850  135984 addons.go:238] Setting addon ingress-dns=true in "addons-222746"
	I1018 08:59:07.553859  135984 addons.go:69] Setting volcano=true in profile "addons-222746"
	I1018 08:59:07.553872  135984 addons.go:69] Setting volumesnapshots=true in profile "addons-222746"
	I1018 08:59:07.553883  135984 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-222746"
	I1018 08:59:07.553765  135984 addons.go:238] Setting addon yakd=true in "addons-222746"
	I1018 08:59:07.553899  135984 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-222746"
	I1018 08:59:07.553905  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553883  135984 addons.go:69] Setting storage-provisioner=true in profile "addons-222746"
	I1018 08:59:07.553914  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553922  135984 addons.go:69] Setting inspektor-gadget=true in profile "addons-222746"
	I1018 08:59:07.553933  135984 addons.go:238] Setting addon storage-provisioner=true in "addons-222746"
	I1018 08:59:07.553938  135984 addons.go:238] Setting addon inspektor-gadget=true in "addons-222746"
	I1018 08:59:07.553955  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553993  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554012  135984 addons.go:69] Setting registry-creds=true in profile "addons-222746"
	I1018 08:59:07.554007  135984 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-222746"
	I1018 08:59:07.554031  135984 addons.go:238] Setting addon registry-creds=true in "addons-222746"
	I1018 08:59:07.554055  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554059  135984 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-222746"
	I1018 08:59:07.554084  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554359  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554493  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553861  135984 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-222746"
	I1018 08:59:07.554502  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554512  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554518  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554524  135984 addons.go:69] Setting cloud-spanner=true in profile "addons-222746"
	I1018 08:59:07.554535  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554536  135984 addons.go:238] Setting addon cloud-spanner=true in "addons-222746"
	I1018 08:59:07.554560  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554993  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553885  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554513  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554518  135984 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-222746"
	I1018 08:59:07.556391  135984 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-222746"
	I1018 08:59:07.556423  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.556471  135984 addons.go:69] Setting default-storageclass=true in profile "addons-222746"
	I1018 08:59:07.556484  135984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-222746"
	I1018 08:59:07.556897  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.555971  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553909  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553874  135984 addons.go:238] Setting addon volcano=true in "addons-222746"
	I1018 08:59:07.557439  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553995  135984 addons.go:69] Setting registry=true in profile "addons-222746"
	I1018 08:59:07.557719  135984 addons.go:238] Setting addon registry=true in "addons-222746"
	I1018 08:59:07.557768  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554496  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.559560  135984 addons.go:69] Setting gcp-auth=true in profile "addons-222746"
	I1018 08:59:07.553889  135984 addons.go:238] Setting addon volumesnapshots=true in "addons-222746"
	I1018 08:59:07.559878  135984 mustload.go:65] Loading cluster: addons-222746
	I1018 08:59:07.560233  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.560470  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:59:07.559416  135984 out.go:179] * Verifying Kubernetes components...
	I1018 08:59:07.561955  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.562172  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:59:07.562601  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.563169  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569020  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569020  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569939  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.571327  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.608037  135984 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:59:07.609378  135984 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:59:07.609415  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:59:07.609494  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.618429  135984 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:59:07.619225  135984 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:59:07.623085  135984 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-222746"
	I1018 08:59:07.625294  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.625803  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.626521  135984 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:59:07.626535  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:59:07.626601  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.627239  135984 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:59:07.627258  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:59:07.627294  135984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:59:07.627364  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:59:07.627310  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.627591  135984 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:59:07.629019  135984 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:59:07.629059  135984 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:59:07.629072  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:59:07.629131  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.629306  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:59:07.629459  135984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:59:07.629472  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:59:07.629527  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.630053  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:59:07.630101  135984 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:59:07.630152  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.633948  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:59:07.635176  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:59:07.637581  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:59:07.640368  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:59:07.643957  135984 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:59:07.645346  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:59:07.645406  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:59:07.645461  135984 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:59:07.645484  135984 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:59:07.645563  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	W1018 08:59:07.646814  135984 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:59:07.647992  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:59:07.648077  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:07.648679  135984 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:59:07.649415  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:59:07.649438  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:59:07.649512  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.652283  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:07.653003  135984 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:59:07.653520  135984 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:59:07.653546  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:59:07.653606  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.659302  135984 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:59:07.659329  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:59:07.659390  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.662793  135984 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:59:07.663915  135984 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:59:07.663934  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:59:07.663993  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.676026  135984 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:59:07.677086  135984 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:59:07.677472  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:59:07.677489  135984 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:59:07.677552  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.678466  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:59:07.679648  135984 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:59:07.679723  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:59:07.679739  135984 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:59:07.679796  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.681071  135984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:59:07.681090  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:59:07.681148  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.690319  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.695420  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 08:59:07.697408  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.698003  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.704361  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.706328  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.714055  135984 addons.go:238] Setting addon default-storageclass=true in "addons-222746"
	I1018 08:59:07.717991  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.722216  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.724681  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.729058  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.732893  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.733306  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.740348  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.744317  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.749394  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.753676  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.757650  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.763688  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	W1018 08:59:07.765805  135984 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:59:07.766118  135984 retry.go:31] will retry after 310.191667ms: ssh: handshake failed: EOF
	I1018 08:59:07.774143  135984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:59:07.776311  135984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:59:07.776431  135984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:59:07.776497  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.808052  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.874664  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:59:07.885050  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:59:07.886777  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:59:07.887164  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:59:07.901527  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:59:07.917323  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:59:07.917348  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:59:07.918328  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:59:07.922345  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:59:07.929273  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:59:07.929299  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:59:07.934058  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:59:07.934081  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:59:07.934552  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:59:07.934569  135984 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:59:07.938564  135984 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:59:07.938585  135984 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:59:07.940110  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:59:07.965581  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:59:07.965635  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:59:07.973143  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:59:07.973174  135984 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:59:07.982424  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:59:07.991133  135984 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:59:07.991219  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:59:07.991672  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:59:07.991835  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:59:07.993893  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:59:07.993916  135984 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:59:08.006492  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:59:08.006519  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:59:08.032170  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:59:08.032204  135984 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:59:08.035068  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:59:08.038202  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:59:08.038286  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:59:08.045416  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:59:08.045438  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:59:08.046026  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:59:08.046086  135984 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:59:08.089346  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:59:08.089887  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:59:08.089967  135984 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:59:08.102677  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:59:08.102767  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:59:08.103590  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:59:08.103612  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:59:08.134715  135984 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
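The "host record injected" line above is minikube rewriting the Corefile held in the kube-system/coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 here). A minimal client-go sketch of that style of ConfigMap edit follows; the hosts stanza text and the splice point are illustrative assumptions, not minikube's exact rewrite:

	// hostrecord.go - sketch of the "host record injected into CoreDNS's
	// ConfigMap" step: add a hosts{} stanza for host.minikube.internal to
	// the Corefile stored in the kube-system/coredns ConfigMap.
	package hostrecord

	import (
		"context"
		"fmt"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func injectHostRecord(ctx context.Context, cs *kubernetes.Clientset, hostIP string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
			return nil // already injected
		}
		stanza := fmt.Sprintf("\n        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }", hostIP)
		// Illustrative splice after a line present in the default Corefile;
		// the real rewrite parses the server block rather than string-matching.
		cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "prometheus :9153", "prometheus :9153"+stanza, 1)
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}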
	I1018 08:59:08.136859  135984 node_ready.go:35] waiting up to 6m0s for node "addons-222746" to be "Ready" ...
	I1018 08:59:08.155977  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:59:08.163549  135984 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:08.163646  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:59:08.185469  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:59:08.185561  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:59:08.230286  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:08.239011  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:59:08.239035  135984 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:59:08.295408  135984 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:08.295503  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:59:08.296281  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:59:08.296304  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:59:08.342682  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:08.368297  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:59:08.368328  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:59:08.435523  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:59:08.435569  135984 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:59:08.484647  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:59:08.641507  135984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-222746" context rescaled to 1 replicas
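The "rescaled to 1 replicas" step goes through the deployment's scale subresource (get, modify, update), which leaves concurrent edits to the rest of the deployment spec alone. A minimal sketch, with a function name of our own choosing:

	// rescale.go - sketch of trimming coredns to a single replica on a
	// single-node cluster via the scale subresource.
	package rescale

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func scaleDeployment(ctx context.Context, cs *kubernetes.Clientset, ns, name string, replicas int32) error {
		scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
		return err
	}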
	I1018 08:59:09.111889  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.189382354s)
	I1018 08:59:09.111937  135984 addons.go:479] Verifying addon ingress=true in "addons-222746"
	I1018 08:59:09.111895  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.171752638s)
	I1018 08:59:09.111974  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.076827285s)
	I1018 08:59:09.112000  135984 addons.go:479] Verifying addon registry=true in "addons-222746"
	I1018 08:59:09.111934  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.129473176s)
	I1018 08:59:09.112033  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.022660496s)
	I1018 08:59:09.112049  135984 addons.go:479] Verifying addon metrics-server=true in "addons-222746"
	I1018 08:59:09.114227  135984 out.go:179] * Verifying registry addon...
	I1018 08:59:09.114242  135984 out.go:179] * Verifying ingress addon...
	I1018 08:59:09.114227  135984 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-222746 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:59:09.116982  135984 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:59:09.116982  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:59:09.119389  135984 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:59:09.119425  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:09.119463  135984 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
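The kapi.go:96 lines that dominate the rest of this log are minikube's readiness poll: list the pods matching a label selector, then re-check on an interval until every pod leaves Pending. A minimal client-go sketch of that loop; the half-second interval matches the tick spacing visible in the log, while the timeout value is an assumption:

	// kapiwait.go - sketch of the label-selector poll behind the repeated
	// `waiting for pod "...", current state: Pending` lines.
	package kapiwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods blocks until all pods matching selector in ns are Running.
	func waitForPods(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient: list again on the next tick
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}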
	I1018 08:59:09.543910  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.31350952s)
	W1018 08:59:09.543969  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:59:09.543995  135984 retry.go:31] will retry after 175.552228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
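This first failure is an ordering race, not a broken manifest: the batch creates the VolumeSnapshot CRDs and, in the same invocation, a VolumeSnapshotClass object, and the CR is rejected because the just-created CRD is not yet established in the API server ("ensure CRDs are installed first"). The forced re-apply issued at 08:59:09.719 completes at 08:59:12.198 with no further volumesnapshot error, once the CRD has registered. A minimal sketch of the retry-with-jittered-backoff pattern the retry.go lines show, shelling out to kubectl as the log does (the try count and base delay are illustrative):

	// applyretry.go - sketch of re-running `kubectl apply` with jittered
	// backoff until freshly created CRDs are established and dependent CRs
	// stop failing to map, mirroring the intervals the log reports
	// (175ms, 403ms, 708ms, ... 9.4s).
	package applyretry

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(kubectl string, manifests []string, maxTries int) error {
		delay := 150 * time.Millisecond
		var lastErr error
		for try := 0; try < maxTries; try++ {
			args := []string{"apply", "--force"}
			for _, m := range manifests {
				args = append(args, "-f", m)
			}
			if out, err := exec.Command(kubectl, args...).CombinedOutput(); err == nil {
				return nil
			} else {
				lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			}
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
			delay *= 2 // roughly doubling, as in the log
		}
		return lastErr
	}

A more surgical fix would wait for the CRD's Established condition before applying dependent CRs, but blind retry is what this log shows.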
	I1018 08:59:09.544052  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201330957s)
	W1018 08:59:09.544095  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:09.544114  135984 retry.go:31] will retry after 148.861562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
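The inspektor-gadget failure is different in kind: kubectl reports that some document in ig-crd.yaml has apiVersion and kind unset, so the manifest itself is malformed and every retry below fails identically, regardless of backoff. A pre-flight check like this sketch would surface that before the first apply (the naive "---" splitting and object-only documents are assumptions; real tooling uses a YAML document decoder):

	// manifestcheck.go - sketch of a pre-flight check for the exact error
	// kubectl reports above: a YAML document missing apiVersion or kind.
	package manifestcheck

	import (
		"fmt"
		"strings"

		"sigs.k8s.io/yaml"
	)

	func checkManifest(manifest string) error {
		for i, doc := range strings.Split(manifest, "\n---") {
			var obj map[string]interface{}
			if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
				return fmt.Errorf("document %d: %v", i, err)
			}
			if len(obj) == 0 {
				continue // blank or comment-only separator document
			}
			if obj["apiVersion"] == nil || obj["kind"] == nil {
				return fmt.Errorf("document %d: apiVersion or kind not set", i)
			}
		}
		return nil
	}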
	I1018 08:59:09.544303  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.059599111s)
	I1018 08:59:09.544335  135984 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-222746"
	I1018 08:59:09.546040  135984 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:59:09.548294  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:59:09.552020  135984 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:59:09.552044  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:09.653259  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:09.653502  135984 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:59:09.653518  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:09.693418  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:09.719949  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:10.051354  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:59:10.140122  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:10.151939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:10.152157  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:10.254739  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:10.254777  135984 retry.go:31] will retry after 403.262262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:10.551493  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:10.619867  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:10.620004  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:10.659145  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:11.051788  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:11.152298  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:11.152413  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:11.551240  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:11.619800  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:11.620020  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:12.051188  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:12.151240  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:12.151392  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:12.198644  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.478635708s)
	I1018 08:59:12.198692  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.539509425s)
	W1018 08:59:12.198742  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:12.198770  135984 retry.go:31] will retry after 708.576252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:12.551295  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:12.619668  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:12.619815  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:12.640210  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:12.908084  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:13.052676  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:13.153585  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:13.153803  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:13.435669  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:13.435702  135984 retry.go:31] will retry after 488.395258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:13.551910  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:13.620106  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:13.620274  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:13.925178  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:14.051472  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:14.151955  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:14.152134  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:14.443058  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:14.443094  135984 retry.go:31] will retry after 958.977433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:14.551218  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:14.619673  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:14.619867  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:15.051557  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:59:15.139534  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:15.152120  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:15.152255  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:15.310975  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:59:15.311038  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:15.328003  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:15.402898  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:15.435573  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:59:15.448487  135984 addons.go:238] Setting addon gcp-auth=true in "addons-222746"
	I1018 08:59:15.448551  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:15.449008  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:15.468892  135984 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:59:15.468946  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:15.487676  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
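The scp and credential-probe steps above ride on an SSH client dialed to the Docker-published host port for the node container's 22/tcp (32888 here), authenticated as user docker with the machine's id_rsa. A minimal sketch with golang.org/x/crypto/ssh, under those assumptions:

	// sshclient.go - sketch of the "new ssh client" step: reach the node
	// container over the HostPort that `docker container inspect` reported.
	package sshclient

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func dialNode(port int, keyPath string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node; never for production
		}
		return ssh.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", port), cfg)
	}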
	I1018 08:59:15.551859  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:15.619991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:15.620128  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:15.938559  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:15.938608  135984 retry.go:31] will retry after 1.511050601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:15.940303  135984 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:59:15.941613  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:15.942638  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:59:15.942651  135984 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:59:15.955641  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:59:15.955667  135984 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:59:15.968434  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:59:15.968458  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:59:15.980747  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:59:16.051532  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:16.120612  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:16.120709  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:16.268201  135984 addons.go:479] Verifying addon gcp-auth=true in "addons-222746"
	I1018 08:59:16.269451  135984 out.go:179] * Verifying gcp-auth addon...
	I1018 08:59:16.271537  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:59:16.273582  135984 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:59:16.273600  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:16.551266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:16.619913  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:16.620077  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:16.774717  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:17.051601  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:17.120045  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:17.120196  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:17.139986  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:17.274430  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:17.450676  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:17.551646  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:17.620427  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:17.620643  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:17.775271  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:17.976074  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:17.976107  135984 retry.go:31] will retry after 3.440906777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:18.051571  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:18.120059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:18.120198  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:18.275208  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:18.551886  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:18.620503  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:18.620579  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:18.774484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:19.051237  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:19.120059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:19.120228  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:19.274657  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:19.551255  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:19.619667  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:19.619935  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:19.640050  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:19.774670  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:20.051451  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:20.119921  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:20.120043  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:20.274724  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:20.551286  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:20.619876  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:20.619995  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:20.774758  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:21.051774  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:21.120109  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:21.120278  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:21.275288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:21.417468  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:21.551699  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:21.620937  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:21.620989  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:21.640094  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:21.774894  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:21.940477  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:21.940506  135984 retry.go:31] will retry after 4.245475929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:22.050960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:22.120417  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:22.120569  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:22.274257  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:22.550895  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:22.620492  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:22.620592  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:22.774513  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:23.051506  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:23.120055  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:23.120112  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:23.275071  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:23.552051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:23.619615  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:23.619662  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:23.774584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:24.051441  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:24.119946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:24.120109  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:24.139529  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:24.274285  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:24.550837  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:24.620308  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:24.620386  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:24.773902  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:25.051869  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:25.120195  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:25.120314  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:25.275062  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:25.550909  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:25.620266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:25.620474  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:25.773950  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:26.051629  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:26.120123  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:26.120357  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:26.186726  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:26.275248  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:26.550959  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:26.619407  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:26.619488  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:26.639628  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	W1018 08:59:26.702426  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:26.702460  135984 retry.go:31] will retry after 9.415003353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:26.775072  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:27.052051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:27.122104  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:27.122476  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:27.274243  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:27.550726  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:27.620245  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:27.620376  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:27.774986  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:28.051727  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:28.120137  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:28.120329  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:28.274082  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:28.551679  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:28.620031  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:28.620094  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:28.774573  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:29.051598  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:29.120084  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:29.120199  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:29.139335  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:29.274966  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:29.551475  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:29.619949  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:29.620139  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:29.775043  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:30.051556  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:30.119850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:30.120034  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:30.274716  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:30.551370  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:30.619773  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:30.620092  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:30.775480  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:31.051306  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:31.119696  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:31.119809  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:31.139933  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:31.274836  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:31.551217  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:31.619739  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:31.619946  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:31.774798  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:32.051308  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:32.119509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:32.119688  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:32.274103  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:32.551768  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:32.620067  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:32.620204  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:32.774558  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:33.051572  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:33.119889  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:33.120009  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:33.140056  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:33.274695  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:33.551351  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:33.619864  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:33.619979  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:33.774806  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:34.051755  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:34.120150  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:34.120322  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:34.274023  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:34.551979  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:34.620508  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:34.620605  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:34.774526  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:35.051290  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:35.119783  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:35.120015  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:35.140344  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:35.274883  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:35.551853  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:35.620051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:35.620162  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:35.774581  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:36.051442  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:36.117617  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:36.120098  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:36.120145  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:36.274437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:36.551061  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:36.619815  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:36.619958  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:36.640891  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:36.640931  135984 retry.go:31] will retry after 9.655087572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:36.774325  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:37.051349  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:37.119890  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:37.119892  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:37.274603  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:37.551210  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:37.619709  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:37.619767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:59:37.640160  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:37.774704  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:38.051286  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:38.119509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:38.119651  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:38.273994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:38.551579  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:38.619937  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:38.620059  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:38.774606  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:39.051302  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:39.119650  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:39.119838  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:39.274307  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:39.550936  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:39.620496  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:39.620555  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:39.774398  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:40.051041  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:40.120341  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:40.120622  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:40.139896  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:40.274106  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:40.550444  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:40.619706  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:40.619935  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:40.774191  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:41.050945  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:41.120330  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:41.120515  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:41.274057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:41.552143  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:41.619555  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:41.619778  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:41.774572  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:42.051401  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:42.120741  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:42.120984  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:42.140032  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:42.274600  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:42.551456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:42.620041  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:42.620160  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:42.774007  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:43.051946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:43.120161  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:43.120392  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:43.274946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:43.551426  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:43.619850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:43.619930  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:43.774758  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:44.051518  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:44.119903  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:44.119944  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:44.274543  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:44.551173  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:44.619537  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:44.619673  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:44.640161  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:44.774706  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:45.051408  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:45.119748  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:45.119903  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:45.274912  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:45.550925  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:45.620289  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:45.620473  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:45.774271  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:46.050814  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:46.120187  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:46.120352  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:46.274433  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:46.296623  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:46.551306  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:46.620403  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:46.620526  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:46.774456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:46.817604  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:46.817642  135984 retry.go:31] will retry after 15.11360554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
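The three apply attempts fail identically while the retry.go:31 delays grow (~9.4s, ~9.7s, ~15.1s) — the signature of a jittered backoff loop. A minimal sketch of that pattern follows, assuming kubectl on PATH; the base delay, growth factor, and jitter here are illustrative choices, not minikube's actual tuning.

// retry_sketch.go — retry-with-jittered-backoff pattern reflected in the
// retry.go:31 lines above. Parameters are assumptions for illustration.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retryApply(files []string, attempts int) error {
	delay := 8 * time.Second // assumed base delay
	var err error
	for i := 0; i < attempts; i++ {
		args := []string{"apply", "--force"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		// Random jitter keeps parallel retries from synchronizing.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay between attempts
	}
	return err
}

func main() {
	files := []string{
		"/etc/kubernetes/addons/ig-crd.yaml",
		"/etc/kubernetes/addons/ig-deployment.yaml",
	}
	if err := retryApply(files, 4); err != nil {
		fmt.Println("giving up:", err)
	}
}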
	I1018 08:59:47.051178  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:47.119588  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:47.119783  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:47.139758  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:47.274208  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:47.550941  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:47.620258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:47.620371  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:47.774101  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:48.051878  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:48.120278  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:48.120453  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:48.274965  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:48.551300  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:48.619650  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:48.619768  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:48.774991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.051295  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:49.119774  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:49.119939  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:49.140058  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:49.274767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.551293  135984 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:59:49.551319  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:49.620155  135984 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:59:49.620175  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:49.620229  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:49.639340  135984 node_ready.go:49] node "addons-222746" is "Ready"
	I1018 08:59:49.639365  135984 node_ready.go:38] duration metric: took 41.502476687s for node "addons-222746" to be "Ready" ...
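Per the node_ready.go lines, the node flipped from "Ready":"False" to Ready after 41.5s of polling. A sketch of that kind of poll follows, reading the node's Ready condition via kubectl jsonpath rather than minikube's internal API client; the node name is taken from the log.

// node_ready_sketch.go — poll until the node's Ready condition is True.
// Uses kubectl via os/exec for brevity; not minikube's implementation.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const node = "addons-222746"
	jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
	for start := time.Now(); ; time.Sleep(2 * time.Second) {
		out, err := exec.Command("kubectl", "get", "node", node,
			"-o", "jsonpath="+jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Printf("node %q is Ready (took %s)\n", node, time.Since(start))
			return
		}
		fmt.Printf("node %q has \"Ready\":\"False\" status (will retry)\n", node)
	}
}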
	I1018 08:59:49.639380  135984 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:59:49.639430  135984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:59:49.652438  135984 api_server.go:72] duration metric: took 42.098773159s to wait for apiserver process to appear ...
	I1018 08:59:49.652466  135984 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:59:49.652484  135984 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:59:49.656432  135984 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1018 08:59:49.657391  135984 api_server.go:141] control plane version: v1.34.1
	I1018 08:59:49.657414  135984 api_server.go:131] duration metric: took 4.941534ms to wait for apiserver health ...
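The healthz gate above is a plain HTTPS GET against the apiserver that treats a 200 response with body "ok" as healthy. A minimal sketch, with TLS verification skipped as a shortcut; the real check would trust the cluster CA from the kubeconfig.

// healthz_sketch.go — probe the apiserver health endpoint logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Shortcut for the sketch only; use the cluster CA in practice.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}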
	I1018 08:59:49.657423  135984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:59:49.661039  135984 system_pods.go:59] 20 kube-system pods found
	I1018 08:59:49.661069  135984 system_pods.go:61] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.661078  135984 system_pods.go:61] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.661085  135984 system_pods.go:61] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.661090  135984 system_pods.go:61] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.661097  135984 system_pods.go:61] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.661104  135984 system_pods.go:61] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.661108  135984 system_pods.go:61] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.661112  135984 system_pods.go:61] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.661116  135984 system_pods.go:61] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.661123  135984 system_pods.go:61] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.661127  135984 system_pods.go:61] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.661130  135984 system_pods.go:61] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.661135  135984 system_pods.go:61] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.661143  135984 system_pods.go:61] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.661148  135984 system_pods.go:61] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.661153  135984 system_pods.go:61] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.661157  135984 system_pods.go:61] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.661167  135984 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.661177  135984 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.661184  135984 system_pods.go:61] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.661192  135984 system_pods.go:74] duration metric: took 3.763864ms to wait for pod list to return data ...
	I1018 08:59:49.661200  135984 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:59:49.663219  135984 default_sa.go:45] found service account: "default"
	I1018 08:59:49.663236  135984 default_sa.go:55] duration metric: took 2.031591ms for default service account to be created ...
	I1018 08:59:49.663244  135984 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:59:49.666189  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:49.666215  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.666223  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.666229  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.666234  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.666239  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.666243  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.666247  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.666253  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.666256  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.666262  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.666265  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.666269  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.666277  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.666283  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.666292  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.666297  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.666306  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.666311  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.666317  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.666322  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.666336  135984 retry.go:31] will retry after 299.950718ms: missing components: kube-dns
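The "missing components: kube-dns" retry above means the pod inventory is complete except that no CoreDNS pod is Running yet. A sketch of that check using client-go follows, assuming a kubeconfig at the default path and CoreDNS's conventional k8s-app=kube-dns label; an illustration, not minikube's implementation.

// kube_dns_wait_sketch.go — require a Running CoreDNS pod in kube-system.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	home, _ := os.UserHomeDir()
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	if running == 0 {
		fmt.Println("missing components: kube-dns (will retry)")
		return
	}
	fmt.Printf("kube-dns running (%d pod(s))\n", running)
}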
	I1018 08:59:49.776568  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.970569  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:49.970601  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.970608  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.970620  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.970626  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.970631  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.970635  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.970639  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.970642  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.970646  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.970652  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.970656  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.970660  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.970666  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.970675  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.970680  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.970685  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.970691  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.970696  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.970704  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.970708  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.970724  135984 retry.go:31] will retry after 357.656123ms: missing components: kube-dns
	I1018 08:59:50.051762  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:50.120267  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:50.120343  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:50.274934  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:50.332797  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:50.332841  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:50.332851  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Running
	I1018 08:59:50.332863  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:50.332873  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:50.332879  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:50.332883  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:50.332887  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:50.332890  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:50.332897  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:50.332904  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:50.332911  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:50.332915  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:50.332919  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:50.332927  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:50.332932  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:50.332948  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:50.332962  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:50.332975  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:50.332987  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:50.332995  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Running
	I1018 08:59:50.333005  135984 system_pods.go:126] duration metric: took 669.756244ms to wait for k8s-apps to be running ...
	I1018 08:59:50.333015  135984 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:59:50.333066  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:59:50.345503  135984 system_svc.go:56] duration metric: took 12.476587ms WaitForService to wait for kubelet
	I1018 08:59:50.345532  135984 kubeadm.go:586] duration metric: took 42.791874637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
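The kubelet check above relies on systemctl's exit status: for a single unit, "systemctl is-active --quiet kubelet" exits 0 only when the unit is active. Minikube runs its variant of this over SSH inside the node; below is a local sketch of the same kind of test.

// kubelet_svc_sketch.go — nil error from Run() means the unit is active,
// since is-active --quiet signals state purely via exit status.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}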
	I1018 08:59:50.345553  135984 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:59:50.347974  135984 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 08:59:50.348000  135984 node_conditions.go:123] node cpu capacity is 8
	I1018 08:59:50.348020  135984 node_conditions.go:105] duration metric: took 2.462246ms to run NodePressure ...
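The NodePressure step reads the node's advertised capacity (304681132Ki ephemeral storage, 8 CPUs above). A sketch that pulls the same two capacity fields via kubectl jsonpath, with the node name taken from the log:

// node_capacity_sketch.go — read node capacity as node_conditions.go logs it.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const node = "addons-222746"
	for _, res := range []string{"ephemeral-storage", "cpu"} {
		out, err := exec.Command("kubectl", "get", "node", node, "-o",
			fmt.Sprintf("jsonpath={.status.capacity['%s']}", res)).Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("node %s capacity is %s\n", res, out)
	}
}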
	I1018 08:59:50.348034  135984 start.go:241] waiting for startup goroutines ...
	I1018 08:59:50.552055  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:50.620484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:50.620515  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:50.774968  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:51.052364  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:51.121086  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:51.121127  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:51.275121  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:51.551578  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:51.620167  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:51.620204  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:51.775028  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:52.052370  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:52.119961  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:52.120021  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:52.274788  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:52.552169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:52.619868  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:52.620939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:52.774511  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:53.051562  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:53.119878  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:53.119928  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:53.274303  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:53.550885  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:53.620448  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:53.620458  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:53.774901  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:54.052165  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:54.120121  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:54.120191  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:54.275274  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:54.553635  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:54.620125  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:54.620153  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:54.774995  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:55.052266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:55.153335  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:55.153381  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:55.275152  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:55.551274  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:55.619845  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:55.619894  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:55.774291  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:56.052309  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:56.122164  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:56.122562  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:56.276288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:56.552236  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:56.621554  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:56.623038  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:56.774920  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:57.052488  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:57.120432  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:57.120437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:57.275651  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:57.551815  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:57.620638  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:57.620984  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:57.775049  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:58.114288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:58.119405  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:58.119421  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:58.274926  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:58.552149  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:58.620982  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:58.621015  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:58.774445  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:59.051570  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:59.120474  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:59.120816  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:59.275291  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:59.551669  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:59.620649  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:59.620650  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:59.774747  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:00.164924  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:00.164984  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:00.165068  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:00.312456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:00.632158  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:00.632161  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:00.632242  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:00.876320  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.124922  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:01.124994  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:01.125305  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:01.275052  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.552151  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:01.620580  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:01.620632  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:01.775220  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.932451  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:00:02.051509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:02.119994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:02.120082  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:02.275059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:00:02.489032  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:00:02.489069  135984 retry.go:31] will retry after 30.499499181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
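
The stderr above is the root cause of the gadget (inspektor-gadget) addon apply failure: at least one object in /etc/kubernetes/addons/ig-crd.yaml is missing the mandatory top-level apiVersion and kind fields, so client-side validation rejects the file. A minimal sketch of how to confirm and, for debugging only, bypass this on the node (the paths and kubectl binary are taken from the log; --validate=false is the escape hatch the error itself names, and it skips validation entirely):

	# inspect the manifest header; every Kubernetes object must declare both
	# fields, e.g.
	#   apiVersion: apiextensions.k8s.io/v1
	#   kind: CustomResourceDefinition
	head -n 5 /etc/kubernetes/addons/ig-crd.yaml

	# re-run the failing apply with validation disabled (debugging only)
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
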
	I1018 09:00:02.551782  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:02.620209  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:02.620350  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:02.775026  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:03.052196  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:03.119866  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:03.120006  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:03.274484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:03.551403  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:03.619893  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:03.619926  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:03.774492  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:04.051545  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:04.120270  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:04.120334  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:04.275070  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:04.551991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:04.620235  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:04.620436  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:04.774887  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:05.051919  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:05.120315  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:05.120473  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:05.275276  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:05.551608  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:05.620145  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:05.620186  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:05.774634  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:06.051480  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:06.120325  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:06.120360  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:06.275491  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:06.551818  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:06.620777  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:06.620845  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:06.775020  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:07.052177  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:07.119970  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:07.120049  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:07.274673  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:07.552057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:07.620860  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:07.620962  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:07.774619  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:08.051664  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:08.120525  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:08.120758  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:08.274761  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:08.551939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:08.620694  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:08.620891  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:08.774587  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:09.052273  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:09.120724  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:09.120720  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:09.275910  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:09.552288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:09.619960  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:09.620001  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:09.774487  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:10.051584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:10.120892  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:10.121010  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:10.275097  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:10.551431  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:10.619811  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:10.619876  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:10.774584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:11.051994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:11.120694  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:11.120754  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:11.274437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:11.551372  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:11.619921  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:11.619958  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:11.774972  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:12.051736  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:12.152028  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:12.152052  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:12.274851  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:12.552730  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:12.620305  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:12.620383  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:12.775046  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:13.052707  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:13.133520  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:13.133863  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:13.275613  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:13.551804  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:13.620914  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:13.620948  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:13.774864  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:14.052014  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:14.120573  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:14.120616  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:14.275258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:14.551490  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:14.620485  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:14.620498  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:14.774946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:15.051792  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:15.120476  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:15.120623  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:15.275202  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:15.551511  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:15.620192  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:15.620307  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:15.774882  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:16.051938  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:16.120569  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:16.120687  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:16.275298  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:16.551365  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:16.619965  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:16.620120  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:16.774731  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:17.051599  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:17.120384  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:17.120545  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:17.274853  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:17.605421  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:17.619714  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:17.619951  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:17.774438  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:18.051783  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:18.151845  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:18.151929  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:18.274405  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:18.551516  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:18.620283  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:18.620321  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:18.775072  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:19.052532  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:19.120303  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:19.120476  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:19.274960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:19.552494  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:19.619960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:19.620138  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:19.774509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:20.051384  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:20.120109  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:20.120234  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:20.275282  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:20.552392  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:20.620348  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:20.620419  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:20.775095  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:21.052340  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:21.119753  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:21.119960  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:21.274608  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:21.551570  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:21.620110  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:21.620164  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:21.774803  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:22.051415  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:22.152473  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:22.152524  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:22.275226  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:22.552269  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:22.619851  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:22.619907  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:22.774571  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:23.052321  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:23.122451  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:23.122854  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:23.275207  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:23.552593  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:23.621557  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:23.621854  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:23.776066  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:24.052065  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:24.121338  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:24.121395  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:24.276242  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:24.551685  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:24.620915  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:24.620969  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:24.775622  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:25.052405  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:25.120280  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:25.120340  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:25.275225  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:25.551173  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:25.621094  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:25.621240  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:25.775134  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:26.052427  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:26.120205  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:26.120238  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:26.275118  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:26.552455  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:26.620389  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:26.620389  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:26.775344  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:27.051945  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:27.120767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:27.120903  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:27.274771  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:27.552165  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:27.620926  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:27.621028  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:27.775094  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:28.052515  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:28.152792  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:28.152900  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:28.274649  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:28.551954  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:28.620716  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:28.620878  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:28.774152  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:29.052575  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:29.120531  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:29.120690  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:29.274560  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:29.552659  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:29.620399  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:29.620426  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:29.774934  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:30.052057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:30.120861  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:30.121033  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:30.275046  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:30.553098  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:30.620789  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:30.620983  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:30.774777  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:31.052222  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:31.153245  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:31.153272  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:31.275155  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:31.552460  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:31.620765  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:31.620848  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:31.774486  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.051148  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:32.151642  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:32.151675  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:32.274956  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.552296  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:32.620029  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:32.620125  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:32.774949  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.989072  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:00:33.053011  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:33.120759  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:33.121131  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:33.277846  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:33.551763  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:33.620232  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:33.620394  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 09:00:33.668227  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:00:33.668257  135984 retry.go:31] will retry after 34.029741282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
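
The apply has now failed twice with the same validation error, and retry.go schedules another attempt after a longer delay (30.5s, then 34.0s). A shell sketch of the same keep-retrying pattern (the command is copied from the log; the fixed 30s delay is a simplification of minikube's growing, jittered backoff):

	until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
	  sleep 30   # minikube instead waits a randomized, increasing interval
	done

Because the manifest itself is invalid, no amount of retrying can succeed here; the loop only illustrates the mechanism behind the retry.go lines.
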
	I1018 09:00:33.775258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:34.051493  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:34.120113  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:34.120139  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:34.274928  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:34.552188  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:34.621038  135984 kapi.go:107] duration metric: took 1m25.504049422s to wait for kubernetes.io/minikube-addons=registry ...
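
As the ~500ms spacing of the timestamps shows, kapi.go polls the pods matching each label selector roughly twice a second until they are ready; the registry wait above completed after 1m25s. The same condition can be checked by hand (a sketch: the label selector comes from the log, while the kube-system namespace is an assumption about where the registry addon pods run):

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=120s
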
	I1018 09:00:34.621273  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:34.775374  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:35.053088  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:35.120572  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:35.275169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:35.552079  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:35.620794  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:35.774466  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:36.051012  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:36.120450  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:36.309032  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:36.552685  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:36.620396  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:36.775136  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:37.053032  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:37.120355  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:37.275677  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:37.551907  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:37.621475  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:37.774888  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:38.052350  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:38.120265  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:38.275136  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:38.553076  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:38.620566  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:38.775252  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:39.051515  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:39.120347  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:39.275233  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:39.551802  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:39.620378  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:39.774850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:40.052017  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:40.120368  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:40.274951  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:40.552478  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:40.621310  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:40.775363  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:41.051140  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:41.120629  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:41.274010  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:41.552263  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:41.619906  135984 kapi.go:107] duration metric: took 1m32.502923373s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 09:00:41.774453  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:42.064169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:42.274674  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:42.552176  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:42.774971  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:43.052993  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:43.274930  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:43.552107  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:43.774843  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:44.052672  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:44.275026  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:44.552360  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:44.775122  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:45.052491  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:45.275227  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:45.551772  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:45.774444  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:46.053008  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:46.274735  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:46.552635  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:46.774471  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:47.051350  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:47.275036  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:47.552725  135984 kapi.go:107] duration metric: took 1m38.004427241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 09:00:47.774942  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:48.274806  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:48.774882  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:49.275089  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:49.774901  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:50.275049  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:50.774663  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:51.274778  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:51.774486  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:52.274610  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:52.775199  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:53.275516  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:53.775138  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:54.274344  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:54.774739  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:55.275348  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:55.774652  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:56.274433  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:56.774280  135984 kapi.go:107] duration metric: took 1m40.502752737s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 09:00:56.775945  135984 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-222746 cluster.
	I1018 09:00:56.777117  135984 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 09:00:56.778026  135984 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
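A minimal sketch of the opt-out described in the advisory above, assuming a hypothetical pod named demo-pod; the message only names the gcp-auth-skip-secret label key, so the "true" value below is an assumption, not something the message specifies:

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod                     # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"     # label key named by the advisory; value assumed
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]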
	I1018 09:01:07.698205  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:01:08.240178  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 09:01:08.240315  135984 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
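The validation failure above means at least one document inside ig-crd.yaml reached kubectl without its type header; every manifest document must carry both fields. A minimal sketch of the required header (the group/version, kind, and name are illustrative, not the actual contents of ig-crd.yaml):

    apiVersion: apiextensions.k8s.io/v1    # required on every document
    kind: CustomResourceDefinition         # required on every document
    metadata:
      name: traces.gadget.kinvolk.io       # hypothetical name, for illustration

As the error text itself notes, rerunning the apply with --validate=false would merely suppress the check rather than repair the manifest.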
	I1018 09:01:08.242730  135984 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, nvidia-device-plugin, registry-creds, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 09:01:08.243911  135984 addons.go:514] duration metric: took 2m0.690316433s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns nvidia-device-plugin registry-creds storage-provisioner-rancher cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 09:01:08.243967  135984 start.go:246] waiting for cluster config update ...
	I1018 09:01:08.243994  135984 start.go:255] writing updated cluster config ...
	I1018 09:01:08.244249  135984 ssh_runner.go:195] Run: rm -f paused
	I1018 09:01:08.248064  135984 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:01:08.251617  135984 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2kv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.255374  135984 pod_ready.go:94] pod "coredns-66bc5c9577-x2kv4" is "Ready"
	I1018 09:01:08.255400  135984 pod_ready.go:86] duration metric: took 3.759711ms for pod "coredns-66bc5c9577-x2kv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.257353  135984 pod_ready.go:83] waiting for pod "etcd-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.260769  135984 pod_ready.go:94] pod "etcd-addons-222746" is "Ready"
	I1018 09:01:08.260790  135984 pod_ready.go:86] duration metric: took 3.418985ms for pod "etcd-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.262759  135984 pod_ready.go:83] waiting for pod "kube-apiserver-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.266034  135984 pod_ready.go:94] pod "kube-apiserver-addons-222746" is "Ready"
	I1018 09:01:08.266054  135984 pod_ready.go:86] duration metric: took 3.275246ms for pod "kube-apiserver-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.267618  135984 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.652025  135984 pod_ready.go:94] pod "kube-controller-manager-addons-222746" is "Ready"
	I1018 09:01:08.652059  135984 pod_ready.go:86] duration metric: took 384.421132ms for pod "kube-controller-manager-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.852923  135984 pod_ready.go:83] waiting for pod "kube-proxy-pcfd2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.251435  135984 pod_ready.go:94] pod "kube-proxy-pcfd2" is "Ready"
	I1018 09:01:09.251468  135984 pod_ready.go:86] duration metric: took 398.496243ms for pod "kube-proxy-pcfd2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.452332  135984 pod_ready.go:83] waiting for pod "kube-scheduler-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.851747  135984 pod_ready.go:94] pod "kube-scheduler-addons-222746" is "Ready"
	I1018 09:01:09.851777  135984 pod_ready.go:86] duration metric: took 399.41554ms for pod "kube-scheduler-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.851793  135984 pod_ready.go:40] duration metric: took 1.603694979s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:01:09.895295  135984 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:01:09.896972  135984 out.go:179] * Done! kubectl is now configured to use "addons-222746" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.509313724Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-pmfcj/registry-creds" id=f7b539aa-21ea-4801-86cb-b30187bb0f69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.510069186Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.515379229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.515869607Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.545000502Z" level=info msg="Created container aca0998ebc0a92a2837dedf45dfc9f8b9da65da13118d34dab1769c1d77a9f7a: kube-system/registry-creds-764b6fb674-pmfcj/registry-creds" id=f7b539aa-21ea-4801-86cb-b30187bb0f69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.545499712Z" level=info msg="Starting container: aca0998ebc0a92a2837dedf45dfc9f8b9da65da13118d34dab1769c1d77a9f7a" id=87d33d18-0aa6-411b-9e3e-8c63c0618aea name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:02:06 addons-222746 crio[772]: time="2025-10-18T09:02:06.547185872Z" level=info msg="Started container" PID=9002 containerID=aca0998ebc0a92a2837dedf45dfc9f8b9da65da13118d34dab1769c1d77a9f7a description=kube-system/registry-creds-764b6fb674-pmfcj/registry-creds id=87d33d18-0aa6-411b-9e3e-8c63c0618aea name=/runtime.v1.RuntimeService/StartContainer sandboxID=986a245ed405b9c703d365b29cd21cdab8f5ee7e2029be2b3e111729b74b2ec3
	Oct 18 09:03:03 addons-222746 crio[772]: time="2025-10-18T09:03:03.024079209Z" level=info msg="Stopping pod sandbox: 6ed9d02ef8ae840bb4f6b119be8749d6388890448aa90ac7d95452fd83283989" id=977b6925-c910-41dd-a4e2-909ea8e1ede2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:03:03 addons-222746 crio[772]: time="2025-10-18T09:03:03.024133887Z" level=info msg="Stopped pod sandbox (already stopped): 6ed9d02ef8ae840bb4f6b119be8749d6388890448aa90ac7d95452fd83283989" id=977b6925-c910-41dd-a4e2-909ea8e1ede2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:03:03 addons-222746 crio[772]: time="2025-10-18T09:03:03.024406754Z" level=info msg="Removing pod sandbox: 6ed9d02ef8ae840bb4f6b119be8749d6388890448aa90ac7d95452fd83283989" id=2ae3c8a2-d94a-447f-b8f9-0ce1e96ed33f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:03:03 addons-222746 crio[772]: time="2025-10-18T09:03:03.028501569Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:03:03 addons-222746 crio[772]: time="2025-10-18T09:03:03.028556292Z" level=info msg="Removed pod sandbox: 6ed9d02ef8ae840bb4f6b119be8749d6388890448aa90ac7d95452fd83283989" id=2ae3c8a2-d94a-447f-b8f9-0ce1e96ed33f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.424996795Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-9hmrb/POD" id=cb1b46d6-415a-401c-b1d0-7c59b372d7a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.425083384Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.431327951Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9hmrb Namespace:default ID:e8b436f600fa73a83b617157e73822738e95a8770ed2469addcc0a7926d77a45 UID:b24c034a-1c66-4d9d-9db1-75e9b3771f60 NetNS:/var/run/netns/bdb1730d-d271-450a-8171-e2268e295076 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000ac7f8}] Aliases:map[]}"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.431354714Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-9hmrb to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.441852673Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-9hmrb Namespace:default ID:e8b436f600fa73a83b617157e73822738e95a8770ed2469addcc0a7926d77a45 UID:b24c034a-1c66-4d9d-9db1-75e9b3771f60 NetNS:/var/run/netns/bdb1730d-d271-450a-8171-e2268e295076 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000ac7f8}] Aliases:map[]}"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.441984269Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-9hmrb for CNI network kindnet (type=ptp)"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.449305914Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.450097033Z" level=info msg="Ran pod sandbox e8b436f600fa73a83b617157e73822738e95a8770ed2469addcc0a7926d77a45 with infra container: default/hello-world-app-5d498dc89-9hmrb/POD" id=cb1b46d6-415a-401c-b1d0-7c59b372d7a9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.45129741Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=3c5ee113-d9d4-46e3-a496-4ad7d93303e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.451455073Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=3c5ee113-d9d4-46e3-a496-4ad7d93303e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.451489419Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=3c5ee113-d9d4-46e3-a496-4ad7d93303e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.452040483Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=6ff6a41d-2a17-4fce-a4c9-5c46d7e9f735 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:03:59 addons-222746 crio[772]: time="2025-10-18T09:03:59.461483393Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	aca0998ebc0a9       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   986a245ed405b       registry-creds-764b6fb674-pmfcj             kube-system
	80e522e8c9e29       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   9d83fc92b82c7       nginx                                       default
	c9cf2f6523ffc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   cd66c285fc052       busybox                                     default
	713078a1d87ab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   70cd7615709b8       gcp-auth-78565c9fb4-9z7q6                   gcp-auth
	3f1e0ab974c3a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago        Running             csi-snapshotter                          0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	79c91ae766bdc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago        Running             csi-provisioner                          0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	2006d0829aa98       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	da6e806e056d4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	a4fd53616bc11       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	b9be2f644afa5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   df3f2454b3603       ingress-nginx-controller-675c5ddd98-hvm5h   ingress-nginx
	52255095f8932       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   a0caedd357fb0       gadget-7pfdj                                gadget
	edce1d10c783f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   a9f67c9dd7ec8       registry-proxy-cmg9n                        kube-system
	cc9e7bafa8a6c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   3794d4f3b0255       snapshot-controller-7d9fbc56b8-fg66r        kube-system
	0f74c115de3ce       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   df7180f699ba2       csi-hostpathplugin-qqwps                    kube-system
	1267812961fa3       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   2d9b6ece5646b       nvidia-device-plugin-daemonset-bmgjg        kube-system
	1c12fcfd58686       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   109b10224296e       amd-gpu-device-plugin-mcrsn                 kube-system
	13543c0f3dca2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   5daf5bd9730b1       csi-hostpath-resizer-0                      kube-system
	fe7a994e6964d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              patch                                    0                   b98a446e595c7       ingress-nginx-admission-patch-5jjnn         ingress-nginx
	4bf2327b6d921       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   9a163fb9ac1b8       csi-hostpath-attacher-0                     kube-system
	3c83994993aa0       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   1d6f37c793c35       snapshot-controller-7d9fbc56b8-mnxz4        kube-system
	70de5b6959505       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   03480266670ee       ingress-nginx-admission-create-2kfb4        ingress-nginx
	430460fa55c77       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   d044509ef50ec       metrics-server-85b7d694d7-54dxd             kube-system
	dbca38ce17214       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   ddf65e157517d       cloud-spanner-emulator-86bd5cbb97-s6s56     default
	e14e163f8bfc4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   c46bb569eb9e9       local-path-provisioner-648f6765c9-k7dw9     local-path-storage
	910f4bbb59848       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   0df4aaba65fe4       kube-ingress-dns-minikube                   kube-system
	ebae0d10fe53d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           4 minutes ago        Running             registry                                 0                   b03e7ef28e583       registry-6b586f9694-72mcl                   kube-system
	b0ebf5a6f8628       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              4 minutes ago        Running             yakd                                     0                   071450d6c9dee       yakd-dashboard-5ff678cb9-2vdz9              yakd-dashboard
	703f4d898ac52       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago        Running             coredns                                  0                   375dbf8dacc1f       coredns-66bc5c9577-x2kv4                    kube-system
	d058db45cb842       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago        Running             storage-provisioner                      0                   27ed68dec04f3       storage-provisioner                         kube-system
	d3608cbd20f63       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   86cec15cc10d8       kindnet-lxcvf                               kube-system
	2026a4d802754       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   732a5f671d774       kube-proxy-pcfd2                            kube-system
	976f8ced94e7b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago        Running             kube-controller-manager                  0                   ef1a81f013fce       kube-controller-manager-addons-222746       kube-system
	c2f5337233ca0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago        Running             kube-apiserver                           0                   1ea5156282dc0       kube-apiserver-addons-222746                kube-system
	fce4b4ac493ec       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago        Running             kube-scheduler                           0                   4af2070922844       kube-scheduler-addons-222746                kube-system
	179aeead4dbf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago        Running             etcd                                     0                   aadc14daf1b57       etcd-addons-222746                          kube-system
	
	
	==> coredns [703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e] <==
	[INFO] 10.244.0.22:51352 - 11167 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006772922s
	[INFO] 10.244.0.22:58709 - 60983 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005840989s
	[INFO] 10.244.0.22:58707 - 21956 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006377774s
	[INFO] 10.244.0.22:50057 - 34233 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00433407s
	[INFO] 10.244.0.22:52291 - 57953 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006008205s
	[INFO] 10.244.0.22:42173 - 26500 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001867948s
	[INFO] 10.244.0.22:39812 - 53887 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.002021191s
	[INFO] 10.244.0.27:44480 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000245281s
	[INFO] 10.244.0.27:46324 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161917s
	[INFO] 10.244.0.31:50929 - 32915 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.0001907s
	[INFO] 10.244.0.31:47306 - 43066 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000290288s
	[INFO] 10.244.0.31:38713 - 2921 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000150052s
	[INFO] 10.244.0.31:48511 - 2781 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000174394s
	[INFO] 10.244.0.31:49062 - 33181 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000094477s
	[INFO] 10.244.0.31:58045 - 13134 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000138949s
	[INFO] 10.244.0.31:36209 - 27087 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005905685s
	[INFO] 10.244.0.31:55796 - 35454 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.006575538s
	[INFO] 10.244.0.31:46668 - 38764 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006238762s
	[INFO] 10.244.0.31:48806 - 26283 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.006332797s
	[INFO] 10.244.0.31:47918 - 45791 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004521304s
	[INFO] 10.244.0.31:40329 - 61675 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005350556s
	[INFO] 10.244.0.31:41563 - 9665 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00440534s
	[INFO] 10.244.0.31:38928 - 24317 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00476876s
	[INFO] 10.244.0.31:33243 - 9297 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001823617s
	[INFO] 10.244.0.31:38866 - 48041 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001914759s
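The NXDOMAIN-then-NOERROR pattern above is ordinary search-path expansion: with the cluster default of ndots:5, a name such as accounts.google.com is first tried against each configured search domain before the bare name is resolved. A sketch of how a pod spec can shorten that walk using the standard dnsConfig fields (the pod name is illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-demo                     # hypothetical name, for illustration
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                     # names containing a dot resolve as absolute first
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]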
	
	
	==> describe nodes <==
	Name:               addons-222746
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-222746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=addons-222746
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_59_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-222746
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-222746"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-222746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:03:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:02:37 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:02:37 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:02:37 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:02:37 +0000   Sat, 18 Oct 2025 08:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-222746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2ff719aa-4e75-48be-b689-a480c6c5bd53
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  default                     cloud-spanner-emulator-86bd5cbb97-s6s56      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-9hmrb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-7pfdj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-9z7q6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hvm5h    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-mcrsn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 coredns-66bc5c9577-x2kv4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m52s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-qqwps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 etcd-addons-222746                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m57s
	  kube-system                 kindnet-lxcvf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m52s
	  kube-system                 kube-apiserver-addons-222746                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-controller-manager-addons-222746        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-pcfd2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-scheduler-addons-222746                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 metrics-server-85b7d694d7-54dxd              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-bmgjg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 registry-6b586f9694-72mcl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-pmfcj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-cmg9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-fg66r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-mnxz4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  local-path-storage          local-path-provisioner-648f6765c9-k7dw9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2vdz9               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m51s                kube-proxy       
	  Normal  Starting                 5m2s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m2s (x8 over 5m2s)  kubelet          Node addons-222746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m2s (x8 over 5m2s)  kubelet          Node addons-222746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m2s (x8 over 5m2s)  kubelet          Node addons-222746 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m57s                kubelet          Node addons-222746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m57s                kubelet          Node addons-222746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m57s                kubelet          Node addons-222746 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m53s                node-controller  Node addons-222746 event: Registered Node addons-222746 in Controller
	  Normal  NodeReady                4m11s                kubelet          Node addons-222746 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4] <==
	{"level":"warn","ts":"2025-10-18T08:59:00.065665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.072589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.078643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.085438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.091250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.101717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.108527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.115761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.161998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:09.936246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:09.943314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.043818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.050153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.067025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.073412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:00:00.162737Z","caller":"traceutil/trace.go:172","msg":"trace[1548306592] linearizableReadLoop","detail":"{readStateIndex:1027; appliedIndex:1027; }","duration":"112.250124ms","start":"2025-10-18T09:00:00.050464Z","end":"2025-10-18T09:00:00.162714Z","steps":["trace[1548306592] 'read index received'  (duration: 112.240779ms)","trace[1548306592] 'applied index is now lower than readState.Index'  (duration: 7.785µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:00:00.162932Z","caller":"traceutil/trace.go:172","msg":"trace[375874786] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"223.543871ms","start":"2025-10-18T08:59:59.939364Z","end":"2025-10-18T09:00:00.162908Z","steps":["trace[375874786] 'process raft request'  (duration: 223.384271ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:00.162963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.480776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:00:00.163041Z","caller":"traceutil/trace.go:172","msg":"trace[412720658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1005; }","duration":"112.572864ms","start":"2025-10-18T09:00:00.050454Z","end":"2025-10-18T09:00:00.163027Z","steps":["trace[412720658] 'agreement among raft nodes before linearized reading'  (duration: 112.434711ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:00:00.630570Z","caller":"traceutil/trace.go:172","msg":"trace[872626465] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"126.971909ms","start":"2025-10-18T09:00:00.503578Z","end":"2025-10-18T09:00:00.630550Z","steps":["trace[872626465] 'process raft request'  (duration: 126.865631ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:00.875069Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.327333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-18T09:00:00.875101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.57661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:00:00.875141Z","caller":"traceutil/trace.go:172","msg":"trace[683937687] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1006; }","duration":"119.412098ms","start":"2025-10-18T09:00:00.755713Z","end":"2025-10-18T09:00:00.875125Z","steps":["trace[683937687] 'range keys from in-memory index tree'  (duration: 119.249563ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:00:00.875153Z","caller":"traceutil/trace.go:172","msg":"trace[1235244990] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"101.633783ms","start":"2025-10-18T09:00:00.773505Z","end":"2025-10-18T09:00:00.875138Z","steps":["trace[1235244990] 'range keys from in-memory index tree'  (duration: 101.510301ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:17.603623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.902594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040711672565207 > lease_revoke:<id:70cc99f68b280c8c>","response":"size:29"}
	
	
	==> gcp-auth [713078a1d87ab4685bbcbf8d1d3f5e0074bcda5e9a5a2667fa4c20f0f81d9fc7] <==
	2025/10/18 09:00:55 GCP Auth Webhook started!
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:19 Ready to marshal response ...
	2025/10/18 09:01:19 Ready to write response ...
	2025/10/18 09:01:19 Ready to marshal response ...
	2025/10/18 09:01:19 Ready to write response ...
	2025/10/18 09:01:28 Ready to marshal response ...
	2025/10/18 09:01:28 Ready to write response ...
	2025/10/18 09:01:29 Ready to marshal response ...
	2025/10/18 09:01:29 Ready to write response ...
	2025/10/18 09:01:35 Ready to marshal response ...
	2025/10/18 09:01:35 Ready to write response ...
	2025/10/18 09:01:38 Ready to marshal response ...
	2025/10/18 09:01:38 Ready to write response ...
	2025/10/18 09:01:56 Ready to marshal response ...
	2025/10/18 09:01:56 Ready to write response ...
	2025/10/18 09:03:59 Ready to marshal response ...
	2025/10/18 09:03:59 Ready to write response ...
	
	
	==> kernel <==
	 09:04:00 up 46 min,  0 user,  load average: 0.27, 0.97, 1.14
	Linux addons-222746 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c] <==
	I1018 09:01:59.065550       1 main.go:301] handling current node
	I1018 09:02:09.064766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:09.064799       1 main.go:301] handling current node
	I1018 09:02:19.065575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:19.065615       1 main.go:301] handling current node
	I1018 09:02:29.066956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:29.066989       1 main.go:301] handling current node
	I1018 09:02:39.064893       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:39.064920       1 main.go:301] handling current node
	I1018 09:02:49.071193       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:49.071223       1 main.go:301] handling current node
	I1018 09:02:59.065560       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:02:59.065589       1 main.go:301] handling current node
	I1018 09:03:09.064904       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:09.064931       1 main.go:301] handling current node
	I1018 09:03:19.070894       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:19.070935       1 main.go:301] handling current node
	I1018 09:03:29.073995       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:29.074023       1 main.go:301] handling current node
	I1018 09:03:39.064770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:39.064848       1 main.go:301] handling current node
	I1018 09:03:49.071290       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:49.071319       1 main.go:301] handling current node
	I1018 09:03:59.073315       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:03:59.073344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711] <==
	E1018 09:00:11.145844       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	E1018 09:00:11.147610       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	E1018 09:00:11.153038       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	W1018 09:00:12.146174       1 handler_proxy.go:99] no RequestInfo found in the context
	W1018 09:00:12.146209       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:00:12.146214       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 09:00:12.146232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 09:00:12.146281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 09:00:12.147404       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 09:00:16.179263       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:00:16.179316       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 09:00:16.179329       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1018 09:00:16.187172       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 09:01:18.558652       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46284: use of closed network connection
	E1018 09:01:18.706662       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46308: use of closed network connection
	I1018 09:01:35.490892       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1018 09:01:35.680661       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.180.222"}
	I1018 09:01:50.072719       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1018 09:03:59.196991       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.121.66"}
	
	
	==> kube-controller-manager [976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb] <==
	I1018 08:59:07.032719       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 08:59:07.032748       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:59:07.032795       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 08:59:07.032893       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:59:07.032976       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-222746"
	I1018 08:59:07.033477       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 08:59:07.034712       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 08:59:07.034763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:59:07.034800       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 08:59:07.034874       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 08:59:07.034918       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 08:59:07.034931       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 08:59:07.034938       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 08:59:07.040861       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-222746" podCIDRs=["10.244.0.0/24"]
	I1018 08:59:07.052017       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:59:37.038788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:59:37.038975       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:59:37.039027       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:59:37.057950       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:59:37.060899       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:59:37.139503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:59:37.161843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:59:52.038849       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 09:00:07.149101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:00:07.169043       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733] <==
	I1018 08:59:08.807014       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:59:08.914439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:59:09.017951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:59:09.018062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:59:09.018161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:59:09.043077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:59:09.043141       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:59:09.049548       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:59:09.050033       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:59:09.050067       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:59:09.051876       1 config.go:200] "Starting service config controller"
	I1018 08:59:09.051900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:59:09.051964       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:59:09.051986       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:59:09.052020       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:59:09.052033       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:59:09.052062       1 config.go:309] "Starting node config controller"
	I1018 08:59:09.052089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:59:09.052115       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:59:09.152593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:59:09.152605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:59:09.152593       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778] <==
	E1018 08:59:00.616885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:59:00.617563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:59:00.617736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:59:00.617787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:59:00.618091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:59:00.618547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:59:00.618705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:59:00.618768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:59:00.618852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:59:00.618866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:59:00.618921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:59:00.618992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:59:00.618996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:59:00.619111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:59:00.619115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:59:00.619125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:59:01.438954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:59:01.510275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:59:01.555615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:59:01.560589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:59:01.627377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:59:01.669363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:59:01.702441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:59:01.775681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1018 08:59:04.413778       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:02:02 addons-222746 kubelet[1298]: I1018 09:02:02.955888    1298 scope.go:117] "RemoveContainer" containerID="1f3877ab559b704ab4cbec63909c436066de22b54810b80adf423346a4b627ae"
	Oct 18 09:02:02 addons-222746 kubelet[1298]: I1018 09:02:02.964637    1298 scope.go:117] "RemoveContainer" containerID="85c255ae59d1e5328acebabefd501b0acc2e7ca7f163a5ce64aa00d8a0c6df47"
	Oct 18 09:02:02 addons-222746 kubelet[1298]: I1018 09:02:02.974184    1298 scope.go:117] "RemoveContainer" containerID="2ede7d02d14dc944597f1343c3e5141cd8d0682edf4da4e9781efb2ea7b7566f"
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.312977    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f6068ed-1ada-411e-b471-f965920d2240-gcp-creds\") pod \"7f6068ed-1ada-411e-b471-f965920d2240\" (UID: \"7f6068ed-1ada-411e-b471-f965920d2240\") "
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.313033    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgbk4\" (UniqueName: \"kubernetes.io/projected/7f6068ed-1ada-411e-b471-f965920d2240-kube-api-access-jgbk4\") pod \"7f6068ed-1ada-411e-b471-f965920d2240\" (UID: \"7f6068ed-1ada-411e-b471-f965920d2240\") "
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.313111    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f6068ed-1ada-411e-b471-f965920d2240-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "7f6068ed-1ada-411e-b471-f965920d2240" (UID: "7f6068ed-1ada-411e-b471-f965920d2240"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.313159    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^182f1f15-ac01-11f0-8f5b-1a3c103aa517\") pod \"7f6068ed-1ada-411e-b471-f965920d2240\" (UID: \"7f6068ed-1ada-411e-b471-f965920d2240\") "
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.313327    1298 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/7f6068ed-1ada-411e-b471-f965920d2240-gcp-creds\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.315352    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f6068ed-1ada-411e-b471-f965920d2240-kube-api-access-jgbk4" (OuterVolumeSpecName: "kube-api-access-jgbk4") pod "7f6068ed-1ada-411e-b471-f965920d2240" (UID: "7f6068ed-1ada-411e-b471-f965920d2240"). InnerVolumeSpecName "kube-api-access-jgbk4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.316207    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^182f1f15-ac01-11f0-8f5b-1a3c103aa517" (OuterVolumeSpecName: "task-pv-storage") pod "7f6068ed-1ada-411e-b471-f965920d2240" (UID: "7f6068ed-1ada-411e-b471-f965920d2240"). InnerVolumeSpecName "pvc-2171fa35-dc22-4536-a332-7f6fea753acc". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.414049    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jgbk4\" (UniqueName: \"kubernetes.io/projected/7f6068ed-1ada-411e-b471-f965920d2240-kube-api-access-jgbk4\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.414114    1298 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-2171fa35-dc22-4536-a332-7f6fea753acc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^182f1f15-ac01-11f0-8f5b-1a3c103aa517\") on node \"addons-222746\" "
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.419000    1298 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-2171fa35-dc22-4536-a332-7f6fea753acc" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^182f1f15-ac01-11f0-8f5b-1a3c103aa517") on node "addons-222746"
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.515408    1298 reconciler_common.go:299] "Volume detached for volume \"pvc-2171fa35-dc22-4536-a332-7f6fea753acc\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^182f1f15-ac01-11f0-8f5b-1a3c103aa517\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.612878    1298 scope.go:117] "RemoveContainer" containerID="99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301"
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.621046    1298 scope.go:117] "RemoveContainer" containerID="99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301"
	Oct 18 09:02:03 addons-222746 kubelet[1298]: E1018 09:02:03.621461    1298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301\": container with ID starting with 99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301 not found: ID does not exist" containerID="99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301"
	Oct 18 09:02:03 addons-222746 kubelet[1298]: I1018 09:02:03.621494    1298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301"} err="failed to get container status \"99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301\": rpc error: code = NotFound desc = could not find container \"99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301\": container with ID starting with 99c21b86383cc5e63eda876a868f7213e01fd2c6abdbbd8b9eda7ca19b233301 not found: ID does not exist"
	Oct 18 09:02:04 addons-222746 kubelet[1298]: I1018 09:02:04.922243    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f6068ed-1ada-411e-b471-f965920d2240" path="/var/lib/kubelet/pods/7f6068ed-1ada-411e-b471-f965920d2240/volumes"
	Oct 18 09:02:06 addons-222746 kubelet[1298]: I1018 09:02:06.639405    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-pmfcj" podStartSLOduration=177.108093239 podStartE2EDuration="2m58.639385783s" podCreationTimestamp="2025-10-18 08:59:08 +0000 UTC" firstStartedPulling="2025-10-18 09:02:04.94118903 +0000 UTC m=+182.106333345" lastFinishedPulling="2025-10-18 09:02:06.472481592 +0000 UTC m=+183.637625889" observedRunningTime="2025-10-18 09:02:06.639190506 +0000 UTC m=+183.804334825" watchObservedRunningTime="2025-10-18 09:02:06.639385783 +0000 UTC m=+183.804530102"
	Oct 18 09:02:49 addons-222746 kubelet[1298]: I1018 09:02:49.919669    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-bmgjg" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:02:51 addons-222746 kubelet[1298]: I1018 09:02:51.919602    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-cmg9n" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:03:07 addons-222746 kubelet[1298]: I1018 09:03:07.919482    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-mcrsn" secret="" err="secret \"gcp-auth\" not found"
	Oct 18 09:03:59 addons-222746 kubelet[1298]: I1018 09:03:59.204027    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b24c034a-1c66-4d9d-9db1-75e9b3771f60-gcp-creds\") pod \"hello-world-app-5d498dc89-9hmrb\" (UID: \"b24c034a-1c66-4d9d-9db1-75e9b3771f60\") " pod="default/hello-world-app-5d498dc89-9hmrb"
	Oct 18 09:03:59 addons-222746 kubelet[1298]: I1018 09:03:59.204109    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-css8x\" (UniqueName: \"kubernetes.io/projected/b24c034a-1c66-4d9d-9db1-75e9b3771f60-kube-api-access-css8x\") pod \"hello-world-app-5d498dc89-9hmrb\" (UID: \"b24c034a-1c66-4d9d-9db1-75e9b3771f60\") " pod="default/hello-world-app-5d498dc89-9hmrb"
	
	
	==> storage-provisioner [d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a] <==
	W1018 09:03:35.039957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:37.042931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:37.047339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:39.050037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:39.054721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:41.057989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:41.062012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:43.064975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:43.069558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:45.072273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:45.076205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:47.078937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:47.083567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:49.086163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:49.090628       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:51.094336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:51.097986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:53.100553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:53.104075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:55.106890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:55.110434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:57.112942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:57.116354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:59.119296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:03:59.129455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
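Two signals in the logs above are worth separating from the Ingress failure itself. The kube-apiserver errors for v1beta1.metrics.k8s.io (connection refused, then 503) are transient aggregation churn while metrics-server comes up; the log shows recovery at 09:00:16 ("Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager"). The storage-provisioner block is a steady stream of "v1 Endpoints is deprecated" warnings, one pair roughly every two seconds, which is the classic footprint of Endpoints-based leader-election renewals. A minimal sketch of the Lease-based alternative follows, assuming client-go; the lease name and namespace are illustrative and this is not the provisioner's actual code:

	// Sketch only: Lease-based leader election with client-go.
	// The lease name/namespace below are assumptions for illustration.
	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lock on a coordination.k8s.io/v1 Lease instead of a v1 Endpoints object.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second, // matches the ~2s warning cadence above
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
				OnStoppedLeading: func() { log.Println("lost leadership"); os.Exit(0) },
			},
		})
	}

The election semantics (LeaseDuration, RenewDeadline, RetryPeriod) are unchanged; only the lock object moves from the deprecated v1 Endpoints to coordination.k8s.io/v1 Leases, which silences the warning.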
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-222746 -n addons-222746
helpers_test.go:269: (dbg) Run:  kubectl --context addons-222746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-222746 describe pod ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-222746 describe pod ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn: exit status 1 (59.976414ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2kfb4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5jjnn" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-222746 describe pod ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (236.079092ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:04:01.639205  150819 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:04:01.639475  150819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:01.639486  150819 out.go:374] Setting ErrFile to fd 2...
	I1018 09:04:01.639491  150819 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:01.639687  150819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:04:01.639968  150819 mustload.go:65] Loading cluster: addons-222746
	I1018 09:04:01.640297  150819 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:04:01.640313  150819 addons.go:606] checking whether the cluster is paused
	I1018 09:04:01.640393  150819 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:04:01.640404  150819 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:04:01.640729  150819 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:04:01.658851  150819 ssh_runner.go:195] Run: systemctl --version
	I1018 09:04:01.658918  150819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:04:01.678131  150819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:04:01.774709  150819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:04:01.774786  150819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:04:01.805034  150819 cri.go:89] found id: "aca0998ebc0a92a2837dedf45dfc9f8b9da65da13118d34dab1769c1d77a9f7a"
	I1018 09:04:01.805071  150819 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:04:01.805076  150819 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:04:01.805081  150819 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:04:01.805084  150819 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:04:01.805088  150819 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:04:01.805091  150819 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:04:01.805094  150819 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:04:01.805098  150819 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:04:01.805106  150819 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:04:01.805110  150819 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:04:01.805113  150819 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:04:01.805118  150819 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:04:01.805122  150819 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:04:01.805127  150819 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:04:01.805138  150819 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:04:01.805146  150819 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:04:01.805152  150819 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:04:01.805155  150819 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:04:01.805159  150819 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:04:01.805164  150819 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:04:01.805172  150819 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:04:01.805176  150819 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:04:01.805181  150819 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:04:01.805187  150819 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:04:01.805192  150819 cri.go:89] found id: ""
	I1018 09:04:01.805245  150819 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:04:01.819752  150819 out.go:203] 
	W1018 09:04:01.821040  150819 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:04:01.821062  150819 out.go:285] * 
	* 
	W1018 09:04:01.824224  150819 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:04:01.825747  150819 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
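Every `addons disable` in this report fails identically: exit status 11 with MK_ADDON_DISABLE_PAUSED. The command never reaches the addon; it aborts inside the is-the-cluster-paused check, where minikube shells out to `sudo runc list -f json` and treats "open /run/runc: no such file or directory" as fatal. On this crio node runc has no state directory, so the check cannot tell "no paused containers" apart from "runc unavailable". Below is a sketch of a more tolerant check that reads the missing state directory as nothing-is-paused; the helper name and the error-string match are illustrative, not minikube's code:

	// Sketch only: tolerate runc's missing state directory in a paused-check.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	// listPaused returns the raw `runc list` JSON, or "" when runc has no
	// state directory yet (no container has ever been created by runc here).
	func listPaused() (string, error) {
		cmd := exec.Command("sudo", "runc", "list", "-f", "json")
		var out, errb bytes.Buffer
		cmd.Stdout, cmd.Stderr = &out, &errb
		if err := cmd.Run(); err != nil {
			if strings.Contains(errb.String(), "no such file or directory") {
				return "", nil // state dir absent: nothing can be paused
			}
			return "", fmt.Errorf("runc list: %v: %s", err, errb.String())
		}
		return out.String(), nil
	}

	func main() {
		out, err := listPaused()
		if err != nil {
			fmt.Println("paused check failed:", err)
			return
		}
		fmt.Printf("paused check ok, %d bytes of state\n", len(out))
	}

Matching the stderr string is brittle; probing the state directory with os.Stat before invoking runc would be the sturdier variant. The same diagnosis applies to every MK_ADDON_DISABLE_PAUSED exit that follows in this report.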
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable ingress --alsologtostderr -v=1: exit status 11 (229.654762ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:04:01.874456  150883 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:04:01.874746  150883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:01.874756  150883 out.go:374] Setting ErrFile to fd 2...
	I1018 09:04:01.874763  150883 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:04:01.875029  150883 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:04:01.875321  150883 mustload.go:65] Loading cluster: addons-222746
	I1018 09:04:01.875683  150883 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:04:01.875706  150883 addons.go:606] checking whether the cluster is paused
	I1018 09:04:01.875811  150883 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:04:01.875839  150883 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:04:01.876237  150883 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:04:01.894502  150883 ssh_runner.go:195] Run: systemctl --version
	I1018 09:04:01.894550  150883 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:04:01.911886  150883 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:04:02.006681  150883 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:04:02.006767  150883 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:04:02.035930  150883 cri.go:89] found id: "aca0998ebc0a92a2837dedf45dfc9f8b9da65da13118d34dab1769c1d77a9f7a"
	I1018 09:04:02.035959  150883 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:04:02.035965  150883 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:04:02.035969  150883 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:04:02.035973  150883 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:04:02.035977  150883 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:04:02.035981  150883 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:04:02.035985  150883 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:04:02.035989  150883 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:04:02.035997  150883 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:04:02.036001  150883 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:04:02.036006  150883 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:04:02.036011  150883 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:04:02.036016  150883 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:04:02.036020  150883 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:04:02.036028  150883 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:04:02.036035  150883 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:04:02.036041  150883 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:04:02.036044  150883 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:04:02.036047  150883 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:04:02.036051  150883 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:04:02.036057  150883 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:04:02.036062  150883 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:04:02.036067  150883 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:04:02.036072  150883 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:04:02.036079  150883 cri.go:89] found id: ""
	I1018 09:04:02.036124  150883 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:04:02.050385  150883 out.go:203] 
	W1018 09:04:02.051574  150883 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:04:02.051599  150883 out.go:285] * 
	* 
	W1018 09:04:02.054790  150883 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:04:02.055932  150883 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.82s)

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7pfdj" [c71119c7-507e-4470-ac5a-38f2c045439f] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003282591s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (253.665319ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:35.474732  146605 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:35.475097  146605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.475113  146605 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:35.475119  146605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.475481  146605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:35.475891  146605 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:35.476366  146605 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.476387  146605 addons.go:606] checking whether the cluster is paused
	I1018 09:01:35.476520  146605 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.476537  146605 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:35.477132  146605 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:35.498388  146605 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:35.498459  146605 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:35.520277  146605 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:35.621107  146605 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:35.621196  146605 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:35.654914  146605 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:35.654935  146605 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:35.654942  146605 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:35.654946  146605 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:35.654950  146605 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:35.654954  146605 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:35.654958  146605 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:35.654962  146605 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:35.654965  146605 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:35.654991  146605 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:35.655001  146605 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:35.655006  146605 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:35.655010  146605 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:35.655014  146605 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:35.655018  146605 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:35.655032  146605 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:35.655039  146605 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:35.655046  146605 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:35.655049  146605 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:35.655051  146605 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:35.655053  146605 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:35.655055  146605 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:35.655058  146605 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:35.655060  146605 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:35.655062  146605 cri.go:89] found id: ""
	I1018 09:01:35.655099  146605 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:35.670037  146605 out.go:203] 
	W1018 09:01:35.671294  146605 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:35.671317  146605 out.go:285] * 
	* 
	W1018 09:01:35.675298  146605 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:35.676453  146605 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.270704ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004963466s
addons_test.go:463: (dbg) Run:  kubectl --context addons-222746 top pods -n kube-system
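The `kubectl top pods` call above succeeds, which exercises the metrics pipeline end to end: kubectl reads the aggregated metrics.k8s.io/v1beta1 API, the same APIService whose startup churn fills the apiserver log earlier in this report. For reference, a minimal sketch of the equivalent query with the k8s.io/metrics client, assuming a default local kubeconfig; it is not part of the test suite:

	// Sketch only: query pod metrics the way `kubectl top pods` does.
	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		mc := metricsclient.NewForConfigOrDie(cfg)
		// A 503 here would mirror the "failing or missing response" APIService errors above.
		pods, err := mc.MetricsV1beta1().PodMetricses("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			for _, c := range p.Containers {
				fmt.Printf("%s/%s cpu=%s mem=%s\n", p.Name, c.Name, c.Usage.Cpu(), c.Usage.Memory())
			}
		}
	}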
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (236.223515ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:39.026968  147618 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:39.027362  147618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:39.027373  147618 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:39.027377  147618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:39.027640  147618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:39.027992  147618 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:39.028462  147618 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:39.028486  147618 addons.go:606] checking whether the cluster is paused
	I1018 09:01:39.028621  147618 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:39.028639  147618 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:39.029178  147618 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:39.046582  147618 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:39.046632  147618 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:39.065450  147618 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:39.159705  147618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:39.159815  147618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:39.188270  147618 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:39.188309  147618 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:39.188312  147618 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:39.188315  147618 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:39.188318  147618 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:39.188322  147618 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:39.188324  147618 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:39.188327  147618 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:39.188329  147618 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:39.188339  147618 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:39.188342  147618 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:39.188345  147618 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:39.188347  147618 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:39.188350  147618 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:39.188352  147618 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:39.188359  147618 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:39.188365  147618 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:39.188368  147618 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:39.188371  147618 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:39.188373  147618 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:39.188376  147618 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:39.188378  147618 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:39.188380  147618 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:39.188382  147618 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:39.188385  147618 cri.go:89] found id: ""
	I1018 09:01:39.188427  147618 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:39.201862  147618 out.go:203] 
	W1018 09:01:39.203037  147618 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:39.203062  147618 out.go:285] * 
	* 
	W1018 09:01:39.206284  147618 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:39.207364  147618 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)

TestAddons/parallel/CSI (37.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1018 09:01:26.700000  134611 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1018 09:01:26.703260  134611 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1018 09:01:26.703286  134611 kapi.go:107] duration metric: took 3.295518ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.307557ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-222746 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-222746 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6314f2b3-494e-47b8-b8fb-0fc21468cef9] Pending
helpers_test.go:352: "task-pv-pod" [6314f2b3-494e-47b8-b8fb-0fc21468cef9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6314f2b3-494e-47b8-b8fb-0fc21468cef9] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003815959s
addons_test.go:572: (dbg) Run:  kubectl --context addons-222746 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-222746 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-222746 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-222746 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-222746 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-222746 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-222746 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7f6068ed-1ada-411e-b471-f965920d2240] Pending
helpers_test.go:352: "task-pv-pod-restore" [7f6068ed-1ada-411e-b471-f965920d2240] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7f6068ed-1ada-411e-b471-f965920d2240] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003853512s
addons_test.go:614: (dbg) Run:  kubectl --context addons-222746 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-222746 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-222746 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (226.267836ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:02:03.995696  148479 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:02:03.996013  148479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:03.996023  148479 out.go:374] Setting ErrFile to fd 2...
	I1018 09:02:03.996027  148479 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:03.996270  148479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:02:03.996602  148479 mustload.go:65] Loading cluster: addons-222746
	I1018 09:02:03.997002  148479 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:02:03.997020  148479 addons.go:606] checking whether the cluster is paused
	I1018 09:02:03.997114  148479 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:02:03.997132  148479 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:02:03.997499  148479 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:02:04.015003  148479 ssh_runner.go:195] Run: systemctl --version
	I1018 09:02:04.015062  148479 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:02:04.032133  148479 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:02:04.126653  148479 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:02:04.126734  148479 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:02:04.155092  148479 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:02:04.155117  148479 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:02:04.155122  148479 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:02:04.155127  148479 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:02:04.155132  148479 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:02:04.155137  148479 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:02:04.155141  148479 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:02:04.155145  148479 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:02:04.155147  148479 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:02:04.155153  148479 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:02:04.155155  148479 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:02:04.155160  148479 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:02:04.155163  148479 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:02:04.155166  148479 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:02:04.155174  148479 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:02:04.155185  148479 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:02:04.155193  148479 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:02:04.155198  148479 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:02:04.155202  148479 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:02:04.155207  148479 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:02:04.155214  148479 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:02:04.155218  148479 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:02:04.155225  148479 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:02:04.155229  148479 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:02:04.155233  148479 cri.go:89] found id: ""
	I1018 09:02:04.155284  148479 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:02:04.168934  148479 out.go:203] 
	W1018 09:02:04.169981  148479 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:02:04.170005  148479 out.go:285] * 
	* 
	W1018 09:02:04.173898  148479 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:02:04.175045  148479 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (222.943977ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:02:04.220609  148540 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:02:04.220749  148540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:04.220759  148540 out.go:374] Setting ErrFile to fd 2...
	I1018 09:02:04.220766  148540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:02:04.221023  148540 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:02:04.221319  148540 mustload.go:65] Loading cluster: addons-222746
	I1018 09:02:04.221682  148540 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:02:04.221699  148540 addons.go:606] checking whether the cluster is paused
	I1018 09:02:04.221798  148540 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:02:04.221814  148540 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:02:04.222217  148540 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:02:04.238433  148540 ssh_runner.go:195] Run: systemctl --version
	I1018 09:02:04.238523  148540 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:02:04.255511  148540 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:02:04.350544  148540 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:02:04.350623  148540 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:02:04.379907  148540 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:02:04.379928  148540 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:02:04.379932  148540 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:02:04.379936  148540 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:02:04.379938  148540 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:02:04.379942  148540 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:02:04.379944  148540 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:02:04.379949  148540 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:02:04.379957  148540 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:02:04.379964  148540 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:02:04.379968  148540 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:02:04.379973  148540 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:02:04.379977  148540 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:02:04.379981  148540 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:02:04.379985  148540 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:02:04.379996  148540 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:02:04.380004  148540 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:02:04.380009  148540 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:02:04.380012  148540 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:02:04.380014  148540 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:02:04.380019  148540 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:02:04.380024  148540 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:02:04.380027  148540 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:02:04.380039  148540 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:02:04.380044  148540 cri.go:89] found id: ""
	I1018 09:02:04.380088  148540 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:02:04.393061  148540 out.go:203] 
	W1018 09:02:04.394121  148540 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:02:04.394137  148540 out.go:285] * 
	* 
	W1018 09:02:04.397120  148540 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:02:04.398499  148540 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (37.70s)
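
Note that the CSI provisioning flow itself passed before the disable step failed: the harness repeatedly polls `kubectl get pvc <name> -o jsonpath={.status.phase}` until the claim binds, which is what the runs of helpers_test.go:402 above show. A minimal Go sketch of that polling pattern, with hypothetical timeout and interval values (the test's actual interval is not printed in the log):

// pvcwait_sketch.go: poll a PVC until it reports phase "Bound" (illustrative only).
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same jsonpath query the harness issues above.
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}",
			"-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
}

func main() {
	// Names taken from the log; the 6m0s deadline matches the stated wait.
	if err := waitPVCBound("addons-222746", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}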

TestAddons/parallel/Headlamp (2.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-222746 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-222746 --alsologtostderr -v=1: exit status 11 (229.434204ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:24.238711  144966 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:24.239033  144966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:24.239045  144966 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:24.239049  144966 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:24.239267  144966 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:24.239606  144966 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:24.240011  144966 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:24.240030  144966 addons.go:606] checking whether the cluster is paused
	I1018 09:01:24.240127  144966 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:24.240142  144966 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:24.240520  144966 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:24.257641  144966 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:24.257702  144966 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:24.274414  144966 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:24.369469  144966 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:24.369568  144966 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:24.397924  144966 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:24.397951  144966 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:24.397955  144966 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:24.397959  144966 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:24.397961  144966 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:24.397964  144966 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:24.397967  144966 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:24.397969  144966 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:24.397971  144966 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:24.397977  144966 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:24.397982  144966 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:24.397986  144966 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:24.397990  144966 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:24.397994  144966 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:24.397998  144966 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:24.398007  144966 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:24.398011  144966 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:24.398018  144966 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:24.398022  144966 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:24.398027  144966 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:24.398032  144966 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:24.398037  144966 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:24.398040  144966 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:24.398043  144966 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:24.398045  144966 cri.go:89] found id: ""
	I1018 09:01:24.398101  144966 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:24.411399  144966 out.go:203] 
	W1018 09:01:24.412667  144966 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:24.412687  144966 out.go:285] * 
	* 
	W1018 09:01:24.415662  144966 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:24.416881  144966 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-222746 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-222746
helpers_test.go:243: (dbg) docker inspect addons-222746:

-- stdout --
	[
	    {
	        "Id": "08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60",
	        "Created": "2025-10-18T08:58:48.818383465Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 136639,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T08:58:48.849405244Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/hostname",
	        "HostsPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/hosts",
	        "LogPath": "/var/lib/docker/containers/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60/08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60-json.log",
	        "Name": "/addons-222746",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-222746:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-222746",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08bddbb0d829e38df6d73cb968782b51c192be16aad01170d69de1229844bb60",
	                "LowerDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/591de738026e7bb144eb21eb5220004ccdf3b11d69324757172cf3ad4dcc222a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-222746",
	                "Source": "/var/lib/docker/volumes/addons-222746/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-222746",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-222746",
	                "name.minikube.sigs.k8s.io": "addons-222746",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4bfae7c41848df1c2c55af9b1f1cbdbb399d978b3c7814464398ef7c96367b7e",
	            "SandboxKey": "/var/run/docker/netns/4bfae7c41848",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32888"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32889"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32892"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32890"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32891"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-222746": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "42:31:b6:68:ff:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4c138596d16bfd741a46ad14146c73cfc29e5eb10215236c22d54328825d7e82",
	                    "EndpointID": "f6b238c4c4ea6538597beea4d28b78b001604561f841495c56044574c6452680",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-222746",
	                        "08bddbb0d829"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
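The NetworkSettings block above is what the harness consults when it builds an SSH client: earlier in the log, `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746` resolves "22/tcp" to host port 32888. A small sketch of that lookup using the same Go template; the helper name is hypothetical:

// sshport_sketch.go: resolve the host port mapped to the container's SSH port.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostSSHPort(container string) (string, error) {
	// Identical template to the cli_runner invocation shown in the log.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("addons-222746")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // "32888" per the log above
}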
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-222746 -n addons-222746
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-222746 logs -n 25: (1.12484672s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-429693 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-429693   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ delete  │ -p download-only-429693                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-429693   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ start   │ -o=json --download-only -p download-only-234186 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-234186   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ delete  │ -p download-only-234186                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-234186   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ delete  │ -p download-only-429693                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-429693   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ delete  │ -p download-only-234186                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-234186   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ start   │ --download-only -p download-docker-014677 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-014677 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ -p download-docker-014677                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-014677 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ start   │ --download-only -p binary-mirror-818527 --alsologtostderr --binary-mirror http://127.0.0.1:41249 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-818527   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ -p binary-mirror-818527                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-818527   │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ addons  │ disable dashboard -p addons-222746                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ addons  │ enable dashboard -p addons-222746                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ start   │ -p addons-222746 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 09:01 UTC │
	│ addons  │ addons-222746 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ addons-222746 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	│ addons  │ enable headlamp -p addons-222746 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-222746          │ jenkins │ v1.37.0 │ 18 Oct 25 09:01 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:58:25
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:58:25.353444  135984 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:58:25.353561  135984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:25.353567  135984 out.go:374] Setting ErrFile to fd 2...
	I1018 08:58:25.353576  135984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:25.353815  135984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 08:58:25.354504  135984 out.go:368] Setting JSON to false
	I1018 08:58:25.355410  135984 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2449,"bootTime":1760775456,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:58:25.355502  135984 start.go:141] virtualization: kvm guest
	I1018 08:58:25.357290  135984 out.go:179] * [addons-222746] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:58:25.358525  135984 notify.go:220] Checking for updates...
	I1018 08:58:25.358546  135984 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 08:58:25.359687  135984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:58:25.360794  135984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:58:25.361941  135984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 08:58:25.362929  135984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 08:58:25.363919  135984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 08:58:25.365033  135984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:58:25.387253  135984 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:58:25.387328  135984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:25.445043  135984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:25.435927189 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:25.445196  135984 docker.go:318] overlay module found
	I1018 08:58:25.447274  135984 out.go:179] * Using the docker driver based on user configuration
	I1018 08:58:25.448505  135984 start.go:305] selected driver: docker
	I1018 08:58:25.448518  135984 start.go:925] validating driver "docker" against <nil>
	I1018 08:58:25.448529  135984 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 08:58:25.449150  135984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:25.502458  135984 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:25.493371851 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:25.502664  135984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:58:25.502909  135984 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:58:25.504391  135984 out.go:179] * Using Docker driver with root privileges
	I1018 08:58:25.505357  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:58:25.505422  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:25.505436  135984 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:58:25.505509  135984 start.go:349] cluster config:
	{Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1018 08:58:25.506683  135984 out.go:179] * Starting "addons-222746" primary control-plane node in "addons-222746" cluster
	I1018 08:58:25.507697  135984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:58:25.508704  135984 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:58:25.509619  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:25.509661  135984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:58:25.509657  135984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:58:25.509675  135984 cache.go:58] Caching tarball of preloaded images
	I1018 08:58:25.509759  135984 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 08:58:25.509772  135984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 08:58:25.510095  135984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json ...
	I1018 08:58:25.510125  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json: {Name:mkdc42a5bc207c1cc977281fa28ebcc7d4fa6a6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:25.526787  135984 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:58:25.526941  135984 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:58:25.526957  135984 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:58:25.526961  135984 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:58:25.526969  135984 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:58:25.526977  135984 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1018 08:58:38.951436  135984 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1018 08:58:38.951476  135984 cache.go:232] Successfully downloaded all kic artifacts
	I1018 08:58:38.951516  135984 start.go:360] acquireMachinesLock for addons-222746: {Name:mk3d9c09b09d63a7cc3970bf61c61e1409029565 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 08:58:38.951643  135984 start.go:364] duration metric: took 89.833µs to acquireMachinesLock for "addons-222746"
	I1018 08:58:38.951690  135984 start.go:93] Provisioning new machine with config: &{Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:58:38.951776  135984 start.go:125] createHost starting for "" (driver="docker")
	I1018 08:58:38.953450  135984 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1018 08:58:38.953663  135984 start.go:159] libmachine.API.Create for "addons-222746" (driver="docker")
	I1018 08:58:38.953697  135984 client.go:168] LocalClient.Create starting
	I1018 08:58:38.953799  135984 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 08:58:39.062984  135984 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 08:58:39.678275  135984 cli_runner.go:164] Run: docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 08:58:39.694746  135984 cli_runner.go:211] docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 08:58:39.694844  135984 network_create.go:284] running [docker network inspect addons-222746] to gather additional debugging logs...
	I1018 08:58:39.694872  135984 cli_runner.go:164] Run: docker network inspect addons-222746
	W1018 08:58:39.711305  135984 cli_runner.go:211] docker network inspect addons-222746 returned with exit code 1
	I1018 08:58:39.711337  135984 network_create.go:287] error running [docker network inspect addons-222746]: docker network inspect addons-222746: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-222746 not found
	I1018 08:58:39.711374  135984 network_create.go:289] output of [docker network inspect addons-222746]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-222746 not found
	
	** /stderr **
	I1018 08:58:39.711494  135984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:58:39.728479  135984 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ca7c80}
	I1018 08:58:39.728523  135984 network_create.go:124] attempt to create docker network addons-222746 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1018 08:58:39.728575  135984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-222746 addons-222746
	I1018 08:58:39.783590  135984 network_create.go:108] docker network addons-222746 192.168.49.0/24 created
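
The inspect-then-create exchange above is minikube's per-profile network pattern: probe for the network, and only create the bridge when the probe fails with "not found". The create/verify sequence in isolation, with the flags, subnet, and profile name taken from this run (they would differ in another run):

    # Create the per-profile bridge (flags copied from the log above).
    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=addons-222746 \
      addons-222746

    # Confirm the subnet and gateway took effect.
    docker network inspect addons-222746 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
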
	I1018 08:58:39.783622  135984 kic.go:121] calculated static IP "192.168.49.2" for the "addons-222746" container
	I1018 08:58:39.783696  135984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 08:58:39.799357  135984 cli_runner.go:164] Run: docker volume create addons-222746 --label name.minikube.sigs.k8s.io=addons-222746 --label created_by.minikube.sigs.k8s.io=true
	I1018 08:58:39.816949  135984 oci.go:103] Successfully created a docker volume addons-222746
	I1018 08:58:39.817051  135984 cli_runner.go:164] Run: docker run --rm --name addons-222746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --entrypoint /usr/bin/test -v addons-222746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 08:58:44.424437  135984 cli_runner.go:217] Completed: docker run --rm --name addons-222746-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --entrypoint /usr/bin/test -v addons-222746:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (4.607327013s)
	I1018 08:58:44.424465  135984 oci.go:107] Successfully prepared a docker volume addons-222746
	I1018 08:58:44.424505  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:44.424528  135984 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 08:58:44.424574  135984 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-222746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1018 08:58:48.748231  135984 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-222746:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.323622023s)
	I1018 08:58:48.748283  135984 kic.go:203] duration metric: took 4.323743083s to extract preloaded images to volume ...
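
The pair of `docker run` calls above is the kic volume-priming pattern: a throwaway container mounts the named volume alongside the preload tarball (read-only) and untars straight into it, so the node container later boots with /var pre-populated. The same step standalone, using this run's paths (image tag shortened from the digest-pinned reference in the log):

    VOL=addons-222746
    TAR=/home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    IMG=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757

    # One-shot extractor: --rm removes the container as soon as tar exits.
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$TAR":/preloaded.tar:ro \
      -v "$VOL":/extractDir \
      "$IMG" -I lz4 -xf /preloaded.tar -C /extractDir
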
	W1018 08:58:48.748387  135984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 08:58:48.748421  135984 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 08:58:48.748469  135984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 08:58:48.803301  135984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-222746 --name addons-222746 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-222746 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-222746 --network addons-222746 --ip 192.168.49.2 --volume addons-222746:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 08:58:49.059658  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Running}}
	I1018 08:58:49.079452  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.098941  135984 cli_runner.go:164] Run: docker exec addons-222746 stat /var/lib/dpkg/alternatives/iptables
	I1018 08:58:49.142909  135984 oci.go:144] the created container "addons-222746" has a running status.
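
The long `docker run` above publishes the node's service ports (22 for ssh, 2376 for the Docker API, 5000 for the registry, 8443 and 32443 for the apiserver) on loopback-only ephemeral host ports. Because those host ports are assigned dynamically, later steps recover them with `docker container inspect`; the same information is available interactively:

    # All published mappings for the node container ...
    docker port addons-222746
    # ... or a single one (127.0.0.1:32888 for ssh in this run).
    docker port addons-222746 22/tcp
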
	I1018 08:58:49.142946  135984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa...
	I1018 08:58:49.328458  135984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 08:58:49.363105  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.381675  135984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 08:58:49.381695  135984 kic_runner.go:114] Args: [docker exec --privileged addons-222746 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 08:58:49.432302  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:58:49.451553  135984 machine.go:93] provisionDockerMachine start ...
	I1018 08:58:49.451669  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.470094  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.470312  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.470322  135984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 08:58:49.601440  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-222746
	
	I1018 08:58:49.601469  135984 ubuntu.go:182] provisioning hostname "addons-222746"
	I1018 08:58:49.601531  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.619084  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.619380  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.619407  135984 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-222746 && echo "addons-222746" | sudo tee /etc/hostname
	I1018 08:58:49.763196  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-222746
	
	I1018 08:58:49.763263  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:49.779905  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:49.780109  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:49.780126  135984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-222746' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-222746/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-222746' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 08:58:49.910206  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 08:58:49.910240  135984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 08:58:49.910284  135984 ubuntu.go:190] setting up certificates
	I1018 08:58:49.910302  135984 provision.go:84] configureAuth start
	I1018 08:58:49.910359  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:49.927220  135984 provision.go:143] copyHostCerts
	I1018 08:58:49.927287  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 08:58:49.927393  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 08:58:49.927453  135984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 08:58:49.927507  135984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.addons-222746 san=[127.0.0.1 192.168.49.2 addons-222746 localhost minikube]
	I1018 08:58:50.214928  135984 provision.go:177] copyRemoteCerts
	I1018 08:58:50.214984  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 08:58:50.215017  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.231781  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.326582  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 08:58:50.344960  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1018 08:58:50.361719  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 08:58:50.377772  135984 provision.go:87] duration metric: took 467.450843ms to configureAuth
	I1018 08:58:50.377803  135984 ubuntu.go:206] setting minikube options for container-runtime
	I1018 08:58:50.378055  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:58:50.378150  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.395211  135984 main.go:141] libmachine: Using SSH client type: native
	I1018 08:58:50.395459  135984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 32888 <nil> <nil>}
	I1018 08:58:50.395480  135984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 08:58:50.631215  135984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 08:58:50.631239  135984 machine.go:96] duration metric: took 1.179652002s to provisionDockerMachine
	I1018 08:58:50.631250  135984 client.go:171] duration metric: took 11.677542597s to LocalClient.Create
	I1018 08:58:50.631268  135984 start.go:167] duration metric: took 11.677605196s to libmachine.API.Create "addons-222746"
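
Note how the container runtime got its extra flag a few lines up: provisioning writes a one-line env file, /etc/sysconfig/crio.minikube, and restarts the service rather than patching the main config. Assuming (as the restart implies) the crio unit sources that file, and that ps is available in the node image, one way to confirm the flag reached the daemon from the host:

    # The CIDR token should follow the flag on crio's command line.
    docker exec addons-222746 ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- '--insecure-registry'
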
	I1018 08:58:50.631279  135984 start.go:293] postStartSetup for "addons-222746" (driver="docker")
	I1018 08:58:50.631292  135984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 08:58:50.631345  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 08:58:50.631389  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.648401  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.746184  135984 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 08:58:50.750239  135984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 08:58:50.750271  135984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 08:58:50.750286  135984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 08:58:50.750351  135984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 08:58:50.750389  135984 start.go:296] duration metric: took 119.099305ms for postStartSetup
	I1018 08:58:50.750712  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:50.768102  135984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/config.json ...
	I1018 08:58:50.768376  135984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 08:58:50.768422  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.786497  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.878752  135984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 08:58:50.883103  135984 start.go:128] duration metric: took 11.931304054s to createHost
	I1018 08:58:50.883125  135984 start.go:83] releasing machines lock for "addons-222746", held for 11.931468631s
	I1018 08:58:50.883183  135984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-222746
	I1018 08:58:50.899763  135984 ssh_runner.go:195] Run: cat /version.json
	I1018 08:58:50.899802  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.899866  135984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 08:58:50.899933  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:58:50.917042  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:50.917351  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:58:51.059990  135984 ssh_runner.go:195] Run: systemctl --version
	I1018 08:58:51.066088  135984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 08:58:51.098934  135984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 08:58:51.103815  135984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 08:58:51.103927  135984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 08:58:51.128054  135984 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
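
Rather than deleting competing CNI configs, the step above renames anything matching *bridge* or *podman* with a .mk_disabled suffix, so kindnet (recommended earlier for the docker+crio combination) is the only active config and the originals stay restorable. The same find expression, unescaped and rewritten with the safer positional form of `-exec sh -c` (a sketch, not the exact logged invocation):

    # Sideline competing bridge/podman CNI configs; rename, don't delete.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
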
	I1018 08:58:51.128073  135984 start.go:495] detecting cgroup driver to use...
	I1018 08:58:51.128102  135984 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 08:58:51.128139  135984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 08:58:51.143371  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 08:58:51.154976  135984 docker.go:218] disabling cri-docker service (if available) ...
	I1018 08:58:51.155022  135984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 08:58:51.170092  135984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 08:58:51.186368  135984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 08:58:51.266725  135984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 08:58:51.351600  135984 docker.go:234] disabling docker service ...
	I1018 08:58:51.351668  135984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 08:58:51.368757  135984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 08:58:51.380906  135984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 08:58:51.462156  135984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 08:58:51.542997  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 08:58:51.554883  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 08:58:51.569793  135984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 08:58:51.569861  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.579649  135984 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 08:58:51.579719  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.587997  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.596004  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.604257  135984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 08:58:51.611792  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.619819  135984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.632606  135984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 08:58:51.641050  135984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 08:58:51.648183  135984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 08:58:51.655289  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:58:51.731410  135984 ssh_runner.go:195] Run: sudo systemctl restart crio
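
Taken together, the run of sed edits above makes three durable changes to /etc/crio/crio.conf.d/02-crio.conf inside the node: pin the pause image, switch to the systemd cgroup driver with conmon placed in the pod cgroup, and open unprivileged low ports via default_sysctls. The core of it, condensed from the logged commands (run inside the node, e.g. via `minikube ssh`):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio
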
	I1018 08:58:51.826783  135984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 08:58:51.826888  135984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 08:58:51.830817  135984 start.go:563] Will wait 60s for crictl version
	I1018 08:58:51.830903  135984 ssh_runner.go:195] Run: which crictl
	I1018 08:58:51.834504  135984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 08:58:51.858148  135984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 08:58:51.858252  135984 ssh_runner.go:195] Run: crio --version
	I1018 08:58:51.884663  135984 ssh_runner.go:195] Run: crio --version
	I1018 08:58:51.913103  135984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 08:58:51.914317  135984 cli_runner.go:164] Run: docker network inspect addons-222746 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 08:58:51.930211  135984 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1018 08:58:51.934381  135984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
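
The /etc/hosts rewrite above is an idempotent pattern: rather than blindly appending (which would pile up duplicate lines across restarts), it filters out any stale entry for the name and writes the file back in one shot. The same shape with this run's name and IP pulled into variables (variable names are illustrative):

    NAME=host.minikube.internal
    IP=192.168.49.1
    # Drop any stale mapping for $NAME, append the current one, swap the file in.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
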
	I1018 08:58:51.944535  135984 kubeadm.go:883] updating cluster {Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 08:58:51.944678  135984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:51.944742  135984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:58:51.974625  135984 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:58:51.974648  135984 crio.go:433] Images already preloaded, skipping extraction
	I1018 08:58:51.974712  135984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 08:58:51.998148  135984 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 08:58:51.998170  135984 cache_images.go:85] Images are preloaded, skipping loading
	I1018 08:58:51.998180  135984 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1018 08:58:51.998294  135984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-222746 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
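
The kubelet drop-in above uses the standard systemd override rule: the bare `ExecStart=` line first clears the command inherited from the stock kubelet unit, and the second `ExecStart=` then defines the real one; without the blank assignment, systemd would reject a second ExecStart for a non-oneshot service. Once the unit is loaded (the daemon-reload a few lines below), the merged result can be checked with:

    # Show the base unit plus its drop-ins, then the effective command line.
    docker exec addons-222746 systemctl cat kubelet
    docker exec addons-222746 systemctl show kubelet -p ExecStart
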
	I1018 08:58:51.998354  135984 ssh_runner.go:195] Run: crio config
	I1018 08:58:52.039385  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:58:52.039415  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:52.039441  135984 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 08:58:52.039473  135984 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-222746 NodeName:addons-222746 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 08:58:52.039644  135984 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-222746"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
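The rendered kubeadm.yaml above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch, assuming gopkg.in/yaml.v3 and a local copy of the generated file, that walks those documents and prints the kubelet cgroupDriver, which has to agree with CRI-O's systemd cgroup manager for pods to start (this is an illustration, not minikube code):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// e.g. a copy of /var/tmp/minikube/kubeadm.yaml from the node
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Println("kind:", doc["kind"])
		// cgroupDriver must match the runtime's cgroup manager
		// (systemd here), or the kubelet cannot run pods.
		if doc["kind"] == "KubeletConfiguration" {
			fmt.Println("  cgroupDriver:", doc["cgroupDriver"])
		}
	}
}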
	I1018 08:58:52.039715  135984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 08:58:52.047684  135984 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 08:58:52.047743  135984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 08:58:52.055100  135984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1018 08:58:52.067047  135984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 08:58:52.083221  135984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1018 08:58:52.096523  135984 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1018 08:58:52.100309  135984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 08:58:52.110233  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:58:52.187299  135984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:58:52.209075  135984 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746 for IP: 192.168.49.2
	I1018 08:58:52.209098  135984 certs.go:195] generating shared ca certs ...
	I1018 08:58:52.209117  135984 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.209257  135984 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 08:58:52.421213  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt ...
	I1018 08:58:52.421249  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt: {Name:mk43cc1d9eca8b1ae9f5477a3ce778748878dcc4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.421431  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key ...
	I1018 08:58:52.421443  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key: {Name:mkd4fd3ac3b76e1f6e249c88a55986a8ea0c2f62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.421520  135984 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 08:58:52.805703  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt ...
	I1018 08:58:52.805734  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt: {Name:mke5e30a1bcc1bc16d4358d42c0f6b1df1c8176b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.805905  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key ...
	I1018 08:58:52.805917  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key: {Name:mk399fd0ff439f73c972d782761d754ce8457311 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:52.805987  135984 certs.go:257] generating profile certs ...
	I1018 08:58:52.806040  135984 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key
	I1018 08:58:52.806054  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt with IP's: []
	I1018 08:58:53.017845  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt ...
	I1018 08:58:53.017882  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: {Name:mke03f832dafda02bdf462f2edad012119921b91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.018044  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key ...
	I1018 08:58:53.018055  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.key: {Name:mke1c144541163258131f24fc2889eb68ee0c5fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.018126  135984 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929
	I1018 08:58:53.018145  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1018 08:58:53.142977  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 ...
	I1018 08:58:53.143007  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929: {Name:mk33a94e3eb4a900d2b65a5fcedd873cda70dd31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.143169  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929 ...
	I1018 08:58:53.143182  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929: {Name:mkd2592ee160e138c2aee5869cbdabef8281355c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.143252  135984 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt.0804f929 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt
	I1018 08:58:53.143349  135984 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key.0804f929 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key
	I1018 08:58:53.143407  135984 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key
	I1018 08:58:53.143426  135984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt with IP's: []
	I1018 08:58:53.376923  135984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt ...
	I1018 08:58:53.376953  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt: {Name:mkc84b0ac1d726976d83f916213be09e6d6be32d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.377107  135984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key ...
	I1018 08:58:53.377122  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key: {Name:mk64bf062c49f697d92d9d5d0e45f5a0f46edf58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:58:53.377296  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 08:58:53.377331  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 08:58:53.377354  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 08:58:53.377392  135984 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 08:58:53.378007  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 08:58:53.395691  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 08:58:53.412421  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 08:58:53.429255  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 08:58:53.445567  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 08:58:53.462487  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 08:58:53.478999  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 08:58:53.495026  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 08:58:53.511687  135984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 08:58:53.529895  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
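The crypto.go lines above build a shared CA and then profile certificates signed by it, with the apiserver certificate carrying the four IP SANs listed at 08:58:53.018145. A self-contained Go sketch of that pattern using only the standard library (names, serials, and lifetimes here are illustrative assumptions, not minikube's actual crypto.go API):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Error handling elided for brevity.
	// 1. Self-signed CA, analogous to the "minikubeCA" generated above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// 2. Server certificate signed by the CA, carrying the IP SANs from
	//    the log: 10.96.0.1 (in-cluster apiserver service IP), 127.0.0.1,
	//    10.0.0.1, and 192.168.49.2 (the node IP).
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}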
	I1018 08:58:53.541478  135984 ssh_runner.go:195] Run: openssl version
	I1018 08:58:53.547220  135984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 08:58:53.557377  135984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.561107  135984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.561159  135984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 08:58:53.595948  135984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 08:58:53.604849  135984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 08:58:53.608487  135984 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 08:58:53.608534  135984 kubeadm.go:400] StartCluster: {Name:addons-222746 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-222746 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:58:53.608594  135984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 08:58:53.608652  135984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 08:58:53.633996  135984 cri.go:89] found id: ""
	I1018 08:58:53.634086  135984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 08:58:53.642438  135984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 08:58:53.650776  135984 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 08:58:53.650852  135984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 08:58:53.658932  135984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 08:58:53.658958  135984 kubeadm.go:157] found existing configuration files:
	
	I1018 08:58:53.659009  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 08:58:53.667800  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 08:58:53.667877  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 08:58:53.675669  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 08:58:53.683001  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 08:58:53.683062  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 08:58:53.690186  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 08:58:53.697379  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 08:58:53.697426  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 08:58:53.704535  135984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 08:58:53.711853  135984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 08:58:53.711913  135984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 08:58:53.719045  135984 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 08:58:53.752321  135984 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 08:58:53.752416  135984 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 08:58:53.771967  135984 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 08:58:53.772044  135984 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 08:58:53.772090  135984 kubeadm.go:318] OS: Linux
	I1018 08:58:53.772154  135984 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 08:58:53.772224  135984 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 08:58:53.772303  135984 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 08:58:53.772373  135984 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 08:58:53.772448  135984 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 08:58:53.772516  135984 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 08:58:53.772598  135984 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 08:58:53.772672  135984 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 08:58:53.825884  135984 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 08:58:53.826016  135984 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 08:58:53.826131  135984 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 08:58:53.832977  135984 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 08:58:53.834856  135984 out.go:252]   - Generating certificates and keys ...
	I1018 08:58:53.834953  135984 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 08:58:53.835052  135984 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 08:58:54.114772  135984 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 08:58:54.397739  135984 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 08:58:54.473360  135984 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 08:58:54.799336  135984 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 08:58:55.021604  135984 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 08:58:55.021794  135984 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-222746 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:58:55.080169  135984 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 08:58:55.080381  135984 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-222746 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1018 08:58:55.674976  135984 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 08:58:55.844281  135984 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 08:58:56.026064  135984 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 08:58:56.026130  135984 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 08:58:56.285221  135984 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 08:58:56.588454  135984 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 08:58:56.990256  135984 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 08:58:57.517914  135984 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 08:58:57.664020  135984 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 08:58:57.664391  135984 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 08:58:57.667805  135984 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 08:58:57.669278  135984 out.go:252]   - Booting up control plane ...
	I1018 08:58:57.669402  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 08:58:57.669518  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 08:58:57.670027  135984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 08:58:57.684022  135984 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 08:58:57.684155  135984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 08:58:57.690547  135984 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 08:58:57.690792  135984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 08:58:57.690869  135984 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 08:58:57.783910  135984 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 08:58:57.784101  135984 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 08:58:58.785656  135984 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001972199s
	I1018 08:58:58.788468  135984 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 08:58:58.788594  135984 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1018 08:58:58.788735  135984 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 08:58:58.788902  135984 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 08:59:00.619301  135984 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.830769446s
	I1018 08:59:01.021076  135984 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.232454951s
	I1018 08:59:02.290444  135984 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501905376s
	I1018 08:59:02.300150  135984 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 08:59:02.309036  135984 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 08:59:02.316583  135984 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 08:59:02.316914  135984 kubeadm.go:318] [mark-control-plane] Marking the node addons-222746 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 08:59:02.324662  135984 kubeadm.go:318] [bootstrap-token] Using token: ysi78m.ifkobpqrcrut0qeu
	I1018 08:59:02.326067  135984 out.go:252]   - Configuring RBAC rules ...
	I1018 08:59:02.326221  135984 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 08:59:02.328913  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 08:59:02.333394  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 08:59:02.336153  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 08:59:02.338141  135984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 08:59:02.340196  135984 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 08:59:02.696432  135984 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 08:59:03.108635  135984 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 08:59:03.696061  135984 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 08:59:03.696868  135984 kubeadm.go:318] 
	I1018 08:59:03.696995  135984 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 08:59:03.697015  135984 kubeadm.go:318] 
	I1018 08:59:03.697133  135984 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 08:59:03.697143  135984 kubeadm.go:318] 
	I1018 08:59:03.697186  135984 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 08:59:03.697276  135984 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 08:59:03.697360  135984 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 08:59:03.697370  135984 kubeadm.go:318] 
	I1018 08:59:03.697446  135984 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 08:59:03.697472  135984 kubeadm.go:318] 
	I1018 08:59:03.697568  135984 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 08:59:03.697580  135984 kubeadm.go:318] 
	I1018 08:59:03.697673  135984 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 08:59:03.697757  135984 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 08:59:03.697835  135984 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 08:59:03.697847  135984 kubeadm.go:318] 
	I1018 08:59:03.697929  135984 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 08:59:03.697995  135984 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 08:59:03.698007  135984 kubeadm.go:318] 
	I1018 08:59:03.698075  135984 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ysi78m.ifkobpqrcrut0qeu \
	I1018 08:59:03.698195  135984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 08:59:03.698237  135984 kubeadm.go:318] 	--control-plane 
	I1018 08:59:03.698242  135984 kubeadm.go:318] 
	I1018 08:59:03.698348  135984 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 08:59:03.698360  135984 kubeadm.go:318] 
	I1018 08:59:03.698452  135984 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ysi78m.ifkobpqrcrut0qeu \
	I1018 08:59:03.698553  135984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 08:59:03.700182  135984 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 08:59:03.700283  135984 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
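The --discovery-token-ca-cert-hash in the join commands above is not a hash of the whole PEM file: kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate with SHA-256. A short Go sketch that recomputes it (the file path is an assumption for illustration; on the node the CA sits at /var/lib/minikube/certs/ca.crt per the scp lines earlier):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // adjust path as needed
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's discovery hash: SHA-256 over the DER-encoded
	// SubjectPublicKeyInfo of the cluster CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}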
	I1018 08:59:03.700308  135984 cni.go:84] Creating CNI manager for ""
	I1018 08:59:03.700318  135984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:59:03.702448  135984 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 08:59:03.703566  135984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 08:59:03.707818  135984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 08:59:03.707918  135984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 08:59:03.720632  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 08:59:03.909971  135984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 08:59:03.910043  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:03.910050  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-222746 minikube.k8s.io/updated_at=2025_10_18T08_59_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=addons-222746 minikube.k8s.io/primary=true
	I1018 08:59:03.919720  135984 ops.go:34] apiserver oom_adj: -16
	I1018 08:59:03.990591  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:04.491213  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:04.990936  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:05.491029  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:05.991003  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:06.491220  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:06.991286  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:07.490924  135984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 08:59:07.552618  135984 kubeadm.go:1113] duration metric: took 3.642633599s to wait for elevateKubeSystemPrivileges
	I1018 08:59:07.552662  135984 kubeadm.go:402] duration metric: took 13.944131015s to StartCluster
	I1018 08:59:07.552697  135984 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:59:07.552813  135984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:59:07.553339  135984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 08:59:07.553574  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 08:59:07.553563  135984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 08:59:07.553587  135984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1018 08:59:07.553737  135984 addons.go:69] Setting yakd=true in profile "addons-222746"
	I1018 08:59:07.553740  135984 addons.go:69] Setting ingress=true in profile "addons-222746"
	I1018 08:59:07.553770  135984 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-222746"
	I1018 08:59:07.553788  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:59:07.553780  135984 addons.go:69] Setting metrics-server=true in profile "addons-222746"
	I1018 08:59:07.553793  135984 addons.go:238] Setting addon ingress=true in "addons-222746"
	I1018 08:59:07.553780  135984 addons.go:69] Setting ingress-dns=true in profile "addons-222746"
	I1018 08:59:07.553840  135984 addons.go:238] Setting addon metrics-server=true in "addons-222746"
	I1018 08:59:07.553850  135984 addons.go:238] Setting addon ingress-dns=true in "addons-222746"
	I1018 08:59:07.553859  135984 addons.go:69] Setting volcano=true in profile "addons-222746"
	I1018 08:59:07.553872  135984 addons.go:69] Setting volumesnapshots=true in profile "addons-222746"
	I1018 08:59:07.553883  135984 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-222746"
	I1018 08:59:07.553765  135984 addons.go:238] Setting addon yakd=true in "addons-222746"
	I1018 08:59:07.553899  135984 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-222746"
	I1018 08:59:07.553905  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553883  135984 addons.go:69] Setting storage-provisioner=true in profile "addons-222746"
	I1018 08:59:07.553914  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553922  135984 addons.go:69] Setting inspektor-gadget=true in profile "addons-222746"
	I1018 08:59:07.553933  135984 addons.go:238] Setting addon storage-provisioner=true in "addons-222746"
	I1018 08:59:07.553938  135984 addons.go:238] Setting addon inspektor-gadget=true in "addons-222746"
	I1018 08:59:07.553955  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553993  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554012  135984 addons.go:69] Setting registry-creds=true in profile "addons-222746"
	I1018 08:59:07.554007  135984 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-222746"
	I1018 08:59:07.554031  135984 addons.go:238] Setting addon registry-creds=true in "addons-222746"
	I1018 08:59:07.554055  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554059  135984 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-222746"
	I1018 08:59:07.554084  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554359  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554493  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553861  135984 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-222746"
	I1018 08:59:07.554502  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554512  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554518  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554524  135984 addons.go:69] Setting cloud-spanner=true in profile "addons-222746"
	I1018 08:59:07.554535  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554536  135984 addons.go:238] Setting addon cloud-spanner=true in "addons-222746"
	I1018 08:59:07.554560  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554993  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553885  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554513  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.554518  135984 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-222746"
	I1018 08:59:07.556391  135984 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-222746"
	I1018 08:59:07.556423  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.556471  135984 addons.go:69] Setting default-storageclass=true in profile "addons-222746"
	I1018 08:59:07.556484  135984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-222746"
	I1018 08:59:07.556897  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.555971  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.553909  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553874  135984 addons.go:238] Setting addon volcano=true in "addons-222746"
	I1018 08:59:07.557439  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.553995  135984 addons.go:69] Setting registry=true in profile "addons-222746"
	I1018 08:59:07.557719  135984 addons.go:238] Setting addon registry=true in "addons-222746"
	I1018 08:59:07.557768  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.554496  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.559560  135984 addons.go:69] Setting gcp-auth=true in profile "addons-222746"
	I1018 08:59:07.553889  135984 addons.go:238] Setting addon volumesnapshots=true in "addons-222746"
	I1018 08:59:07.559878  135984 mustload.go:65] Loading cluster: addons-222746
	I1018 08:59:07.560233  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.560470  135984 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 08:59:07.559416  135984 out.go:179] * Verifying Kubernetes components...
	I1018 08:59:07.561955  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.562172  135984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 08:59:07.562601  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.563169  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569020  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569020  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.569939  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.571327  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.608037  135984 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1018 08:59:07.609378  135984 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:59:07.609415  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1018 08:59:07.609494  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.618429  135984 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1018 08:59:07.619225  135984 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1018 08:59:07.623085  135984 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-222746"
	I1018 08:59:07.625294  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.625803  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.626521  135984 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:59:07.626535  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1018 08:59:07.626601  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.627239  135984 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:59:07.627258  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1018 08:59:07.627294  135984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 08:59:07.627364  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1018 08:59:07.627310  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.627591  135984 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1018 08:59:07.629019  135984 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1018 08:59:07.629059  135984 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:59:07.629072  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1018 08:59:07.629131  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.629306  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1018 08:59:07.629459  135984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:59:07.629472  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 08:59:07.629527  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.630053  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1018 08:59:07.630101  135984 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1018 08:59:07.630152  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.633948  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1018 08:59:07.635176  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1018 08:59:07.637581  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1018 08:59:07.640368  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1018 08:59:07.643957  135984 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1018 08:59:07.645346  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1018 08:59:07.645406  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1018 08:59:07.645461  135984 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1018 08:59:07.645484  135984 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1018 08:59:07.645563  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	W1018 08:59:07.646814  135984 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1018 08:59:07.647992  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1018 08:59:07.648077  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:07.648679  135984 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1018 08:59:07.649415  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1018 08:59:07.649438  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1018 08:59:07.649512  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.652283  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:07.653003  135984 out.go:179]   - Using image docker.io/registry:3.0.0
	I1018 08:59:07.653520  135984 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:59:07.653546  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1018 08:59:07.653606  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.659302  135984 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1018 08:59:07.659329  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1018 08:59:07.659390  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.662793  135984 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1018 08:59:07.663915  135984 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1018 08:59:07.663934  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1018 08:59:07.663993  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.676026  135984 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1018 08:59:07.677086  135984 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1018 08:59:07.677472  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1018 08:59:07.677489  135984 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1018 08:59:07.677552  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.678466  135984 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1018 08:59:07.679648  135984 out.go:179]   - Using image docker.io/busybox:stable
	I1018 08:59:07.679723  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1018 08:59:07.679739  135984 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1018 08:59:07.679796  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.681071  135984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:59:07.681090  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1018 08:59:07.681148  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.690319  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.695420  135984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
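The long sed pipeline above patches the stock CoreDNS Corefile before replacing the configmap: it inserts a log directive ahead of errors and a hosts stanza ahead of the forward . /etc/resolv.conf line, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway (192.168.49.1). Assuming the stock Corefile layout, the patched region reads roughly (intervening stock directives elided):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf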
	I1018 08:59:07.697408  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.698003  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.704361  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.706328  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.714055  135984 addons.go:238] Setting addon default-storageclass=true in "addons-222746"
	I1018 08:59:07.717991  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:07.722216  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:07.724681  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.729058  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.732893  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.733306  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.740348  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.744317  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.749394  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.753676  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.757650  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:07.763688  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	W1018 08:59:07.765805  135984 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1018 08:59:07.766118  135984 retry.go:31] will retry after 310.191667ms: ssh: handshake failed: EOF
	I1018 08:59:07.774143  135984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 08:59:07.776311  135984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 08:59:07.776431  135984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 08:59:07.776497  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:07.808052  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
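Each "new ssh client" line above dials the container's docker-published SSH port (32888 here) as user docker with the per-machine private key. A rough equivalent with golang.org/x/crypto/ssh follows; this is a sketch, not sshutil.go's actual implementation, and the insecure host-key callback mirrors the fact that these throwaway test machines have no pinned host key:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path taken from the log lines above.
	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// No pinned host key for throwaway test nodes.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	// 32888 is the docker-published port for the container's sshd (port 22).
	client, err := ssh.Dial("tcp", "127.0.0.1:32888", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected:", string(client.ServerVersion()))
}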
	I1018 08:59:07.874664  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1018 08:59:07.885050  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 08:59:07.886777  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1018 08:59:07.887164  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1018 08:59:07.901527  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1018 08:59:07.917323  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1018 08:59:07.917348  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1018 08:59:07.918328  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1018 08:59:07.922345  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1018 08:59:07.929273  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1018 08:59:07.929299  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1018 08:59:07.934058  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1018 08:59:07.934081  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1018 08:59:07.934552  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1018 08:59:07.934569  135984 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1018 08:59:07.938564  135984 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1018 08:59:07.938585  135984 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1018 08:59:07.940110  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1018 08:59:07.965581  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1018 08:59:07.965635  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1018 08:59:07.973143  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1018 08:59:07.973174  135984 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1018 08:59:07.982424  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 08:59:07.991133  135984 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:59:07.991219  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1018 08:59:07.991672  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1018 08:59:07.991835  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1018 08:59:07.993893  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1018 08:59:07.993916  135984 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1018 08:59:08.006492  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1018 08:59:08.006519  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1018 08:59:08.032170  135984 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:59:08.032204  135984 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1018 08:59:08.035068  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1018 08:59:08.038202  135984 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1018 08:59:08.038286  135984 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1018 08:59:08.045416  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1018 08:59:08.045438  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1018 08:59:08.046026  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1018 08:59:08.046086  135984 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1018 08:59:08.089346  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1018 08:59:08.089887  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1018 08:59:08.089967  135984 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1018 08:59:08.102677  135984 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1018 08:59:08.102767  135984 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1018 08:59:08.103590  135984 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:59:08.103612  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1018 08:59:08.134715  135984 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1018 08:59:08.136859  135984 node_ready.go:35] waiting up to 6m0s for node "addons-222746" to be "Ready" ...
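node_ready.go polls the API server until the node's Ready condition turns True, giving up after the 6m0s budget; the "will retry" warnings below are individual poll misses. A sketch of an equivalent poll, assuming an already-configured client-go clientset (not minikube's actual code):

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady blocks until the named node reports Ready or ctx expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(2 * time.Second): // poll interval; the log retries every ~2.5s
            }
        }
    }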
	I1018 08:59:08.155977  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1018 08:59:08.163549  135984 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:08.163646  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1018 08:59:08.185469  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1018 08:59:08.185561  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1018 08:59:08.230286  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:08.239011  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1018 08:59:08.239035  135984 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1018 08:59:08.295408  135984 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:08.295503  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1018 08:59:08.296281  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1018 08:59:08.296304  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1018 08:59:08.342682  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:08.368297  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1018 08:59:08.368328  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1018 08:59:08.435523  135984 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:59:08.435569  135984 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1018 08:59:08.484647  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1018 08:59:08.641507  135984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-222746" context rescaled to 1 replicas
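The rescale above trims the default coredns deployment down to a single replica for this one-node cluster. A sketch of the equivalent client-go call, assuming a configured clientset; hypothetical, since kapi.go's exact mechanics are not shown in the log:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS sets the coredns deployment's scale subresource to 1 replica.
    func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }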
	I1018 08:59:09.111889  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.189382354s)
	I1018 08:59:09.111937  135984 addons.go:479] Verifying addon ingress=true in "addons-222746"
	I1018 08:59:09.111895  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.171752638s)
	I1018 08:59:09.111974  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.076827285s)
	I1018 08:59:09.112000  135984 addons.go:479] Verifying addon registry=true in "addons-222746"
	I1018 08:59:09.111934  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.129473176s)
	I1018 08:59:09.112033  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.022660496s)
	I1018 08:59:09.112049  135984 addons.go:479] Verifying addon metrics-server=true in "addons-222746"
	I1018 08:59:09.114227  135984 out.go:179] * Verifying registry addon...
	I1018 08:59:09.114242  135984 out.go:179] * Verifying ingress addon...
	I1018 08:59:09.114227  135984 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-222746 service yakd-dashboard -n yakd-dashboard
	
	I1018 08:59:09.116982  135984 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1018 08:59:09.116982  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1018 08:59:09.119389  135984 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:59:09.119425  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:09.119463  135984 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
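Each "waiting for pod" line that follows comes from the same poll: list pods by label selector, then re-check until they leave Pending. A sketch of that loop with client-go, under the same clientset assumption as above:

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodsRunning polls until every pod matching the selector is Running,
    // e.g. ns "kube-system", selector "kubernetes.io/minikube-addons=registry".
    func waitPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // not found yet; keep polling
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }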
	I1018 08:59:09.543910  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.31350952s)
	W1018 08:59:09.543969  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1018 08:59:09.543995  135984 retry.go:31] will retry after 175.552228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
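The failure above is an ordering problem, not a broken manifest: the VolumeSnapshotClass cannot be created in the same apply that installs its CRD, because the new kind is not yet registered ("established") with the API server. An alternative to blind retrying is to wait for the CRD's Established condition, sketched here with the apiextensions clientset (an assumption; minikube itself simply re-applies with backoff, as the log shows):

    package main

    import (
        "context"
        "time"

        apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // waitCRDEstablished waits until a CRD (e.g.
    // "volumesnapshotclasses.snapshot.storage.k8s.io") reports Established,
    // after which its custom resources can be created.
    func waitCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, time.Second, 30*time.Second, true,
            func(ctx context.Context) (bool, error) {
                crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil
                }
                for _, c := range crd.Status.Conditions {
                    if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

Establishment is also why the re-apply issued at 08:59:09.719 completes cleanly at 08:59:12.198 with no further snapshot errors.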
	I1018 08:59:09.544052  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.201330957s)
	W1018 08:59:09.544095  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:09.544114  135984 retry.go:31] will retry after 148.861562ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
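This second failure is different in kind: kubectl's client-side validation reports that ig-crd.yaml has no apiVersion or kind at all, so the manifest itself is malformed (likely empty or truncated) and no retry can succeed. A hypothetical pre-check that would catch this before calling kubectl, assuming sigs.k8s.io/yaml and inspecting only the first YAML document:

    package main

    import (
        "fmt"
        "os"

        "sigs.k8s.io/yaml"
    )

    // typeMeta mirrors the two fields kubectl's validation says are missing.
    type typeMeta struct {
        APIVersion string `json:"apiVersion"`
        Kind       string `json:"kind"`
    }

    func main() {
        raw, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy of the addon file
        if err != nil {
            panic(err)
        }
        var tm typeMeta
        if err := yaml.Unmarshal(raw, &tm); err != nil { // first document only
            panic(err)
        }
        if tm.APIVersion == "" || tm.Kind == "" {
            fmt.Println("invalid manifest: apiVersion or kind not set; retrying kubectl cannot fix this")
        }
    }

Consistent with a content problem rather than a timing one, every re-apply from 08:59:10.659 through 08:59:26.186 below fails with the identical message.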
	I1018 08:59:09.544303  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.059599111s)
	I1018 08:59:09.544335  135984 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-222746"
	I1018 08:59:09.546040  135984 out.go:179] * Verifying csi-hostpath-driver addon...
	I1018 08:59:09.548294  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1018 08:59:09.552020  135984 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:59:09.552044  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:09.653259  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:09.653502  135984 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1018 08:59:09.653518  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:09.693418  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:09.719949  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1018 08:59:10.051354  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:59:10.140122  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:10.151939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:10.152157  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:10.254739  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:10.254777  135984 retry.go:31] will retry after 403.262262ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:10.551493  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:10.619867  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:10.620004  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:10.659145  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:11.051788  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:11.152298  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:11.152413  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:11.551240  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:11.619800  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:11.620020  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:12.051188  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:12.151240  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:12.151392  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:12.198644  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.478635708s)
	I1018 08:59:12.198692  135984 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.539509425s)
	W1018 08:59:12.198742  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:12.198770  135984 retry.go:31] will retry after 708.576252ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:12.551295  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:12.619668  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:12.619815  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:12.640210  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:12.908084  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:13.052676  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:13.153585  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:13.153803  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:13.435669  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:13.435702  135984 retry.go:31] will retry after 488.395258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:13.551910  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:13.620106  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:13.620274  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:13.925178  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:14.051472  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:14.151955  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:14.152134  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:14.443058  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:14.443094  135984 retry.go:31] will retry after 958.977433ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:14.551218  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:14.619673  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:14.619867  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:15.051557  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1018 08:59:15.139534  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:15.152120  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:15.152255  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:15.310975  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1018 08:59:15.311038  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:15.328003  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
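As throughout this section, the runner first resolves the container's published SSH port with docker container inspect, then opens a fresh SSH session per task. A sketch of that step with golang.org/x/crypto/ssh (an assumption; minikube's sshutil and cli_runner add retries and logging on top):

    package main

    import (
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH dials the forwarded SSH address (here "127.0.0.1:32888",
    // resolved via `docker container inspect` as logged above) and runs one command.
    func runOverSSH(addr, keyPath, cmd string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        })
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run(cmd)
    }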
	I1018 08:59:15.402898  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:15.435573  135984 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1018 08:59:15.448487  135984 addons.go:238] Setting addon gcp-auth=true in "addons-222746"
	I1018 08:59:15.448551  135984 host.go:66] Checking if "addons-222746" exists ...
	I1018 08:59:15.449008  135984 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 08:59:15.468892  135984 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1018 08:59:15.468946  135984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 08:59:15.487676  135984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 08:59:15.551859  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:15.619991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:15.620128  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:15.938559  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:15.938608  135984 retry.go:31] will retry after 1.511050601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:15.940303  135984 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1018 08:59:15.941613  135984 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1018 08:59:15.942638  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1018 08:59:15.942651  135984 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1018 08:59:15.955641  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1018 08:59:15.955667  135984 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1018 08:59:15.968434  135984 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:59:15.968458  135984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1018 08:59:15.980747  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1018 08:59:16.051532  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:16.120612  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:16.120709  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:16.268201  135984 addons.go:479] Verifying addon gcp-auth=true in "addons-222746"
	I1018 08:59:16.269451  135984 out.go:179] * Verifying gcp-auth addon...
	I1018 08:59:16.271537  135984 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1018 08:59:16.273582  135984 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1018 08:59:16.273600  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:16.551266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:16.619913  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:16.620077  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:16.774717  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:17.051601  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:17.120045  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:17.120196  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:17.139986  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:17.274430  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:17.450676  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:17.551646  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:17.620427  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:17.620643  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:17.775271  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:17.976074  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:17.976107  135984 retry.go:31] will retry after 3.440906777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:18.051571  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:18.120059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:18.120198  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:18.275208  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:18.551886  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:18.620503  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:18.620579  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:18.774484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:19.051237  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:19.120059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:19.120228  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:19.274657  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:19.551255  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:19.619667  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:19.619935  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:19.640050  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:19.774670  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:20.051451  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:20.119921  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:20.120043  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:20.274724  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:20.551286  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:20.619876  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:20.619995  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:20.774758  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:21.051774  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:21.120109  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:21.120278  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:21.275288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:21.417468  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:21.551699  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:21.620937  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:21.620989  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:21.640094  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:21.774894  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:21.940477  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:21.940506  135984 retry.go:31] will retry after 4.245475929s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:22.050960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:22.120417  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:22.120569  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:22.274257  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:22.550895  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:22.620492  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:22.620592  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:22.774513  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:23.051506  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:23.120055  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:23.120112  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:23.275071  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:23.552051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:23.619615  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:23.619662  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:23.774584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:24.051441  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:24.119946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:24.120109  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:24.139529  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:24.274285  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:24.550837  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:24.620308  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:24.620386  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:24.773902  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:25.051869  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:25.120195  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:25.120314  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:25.275062  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:25.550909  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:25.620266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:25.620474  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:25.773950  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:26.051629  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:26.120123  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:26.120357  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:26.186726  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:26.275248  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:26.550959  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:26.619407  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:26.619488  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:26.639628  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	W1018 08:59:26.702426  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:26.702460  135984 retry.go:31] will retry after 9.415003353s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
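Note on the failure above: kubectl's client-side validation requires every YAML document in an applied file to declare top-level apiVersion and kind fields, so the repeated "[apiVersion not set, kind not set]" error indicates that /etc/kubernetes/addons/ig-crd.yaml contains at least one document without them (a stray empty document left by a trailing "---" separator produces exactly this message). The --validate=false workaround suggested in the error output would only mask the problem. A quick way to pinpoint the offending document without touching the cluster (a sketch, assuming shell access to the node, e.g. via minikube ssh):

	# re-run the same apply as a client-side dry run; nothing is sent to the cluster
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
	  -f /etc/kubernetes/addons/ig-crd.yaml
	# show where apiVersion/kind appear relative to the "---" document separators
	grep -nE '^(---|apiVersion:|kind:)' /etc/kubernetes/addons/ig-crd.yaml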
	I1018 08:59:26.775072  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:27.052051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:27.122104  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:27.122476  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:27.274243  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:27.550726  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:27.620245  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:27.620376  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:27.774986  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:28.051727  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:28.120137  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:28.120329  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:28.274082  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:28.551679  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:28.620031  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:28.620094  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:28.774573  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:29.051598  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:29.120084  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:29.120199  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:29.139335  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:29.274966  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:29.551475  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:29.619949  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:29.620139  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:29.775043  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:30.051556  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:30.119850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:30.120034  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:30.274716  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:30.551370  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:30.619773  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:30.620092  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:30.775480  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:31.051306  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:31.119696  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:31.119809  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:31.139933  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:31.274836  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:31.551217  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:31.619739  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:31.619946  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:31.774798  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:32.051308  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:32.119509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:32.119688  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:32.274103  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:32.551768  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:32.620067  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:32.620204  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:32.774558  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:33.051572  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:33.119889  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:33.120009  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:33.140056  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:33.274695  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:33.551351  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:33.619864  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:33.619979  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:33.774806  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:34.051755  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:34.120150  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:34.120322  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:34.274023  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:34.551979  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:34.620508  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:34.620605  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:34.774526  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:35.051290  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:35.119783  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:35.120015  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:35.140344  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:35.274883  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:35.551853  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:35.620051  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:35.620162  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:35.774581  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:36.051442  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:36.117617  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:36.120098  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:36.120145  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:36.274437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:36.551061  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:36.619815  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:36.619958  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:36.640891  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:36.640931  135984 retry.go:31] will retry after 9.655087572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:36.774325  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:37.051349  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:37.119890  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:37.119892  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:37.274603  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:37.551210  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:37.619709  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:37.619767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1018 08:59:37.640160  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:37.774704  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:38.051286  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:38.119509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:38.119651  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:38.273994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:38.551579  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:38.619937  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:38.620059  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:38.774606  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:39.051302  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:39.119650  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:39.119838  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:39.274307  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:39.550936  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:39.620496  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:39.620555  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:39.774398  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:40.051041  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:40.120341  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:40.120622  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:40.139896  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:40.274106  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:40.550444  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:40.619706  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:40.619935  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:40.774191  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:41.050945  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:41.120330  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:41.120515  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:41.274057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:41.552143  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:41.619555  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:41.619778  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:41.774572  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:42.051401  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:42.120741  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:42.120984  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:42.140032  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:42.274600  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:42.551456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:42.620041  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:42.620160  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:42.774007  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:43.051946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:43.120161  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:43.120392  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:43.274946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:43.551426  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:43.619850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:43.619930  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:43.774758  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:44.051518  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:44.119903  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:44.119944  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:44.274543  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:44.551173  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:44.619537  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:44.619673  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:44.640161  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:44.774706  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:45.051408  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:45.119748  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:45.119903  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:45.274912  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:45.550925  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:45.620289  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:45.620473  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:45.774271  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:46.050814  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:46.120187  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:46.120352  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:46.274433  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:46.296623  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 08:59:46.551306  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:46.620403  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:46.620526  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:46.774456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 08:59:46.817604  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:46.817642  135984 retry.go:31] will retry after 15.11360554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 08:59:47.051178  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:47.119588  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:47.119783  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:47.139758  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:47.274208  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:47.550941  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:47.620258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:47.620371  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:47.774101  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:48.051878  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:48.120278  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:48.120453  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:48.274965  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:48.551300  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:48.619650  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:48.619768  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:48.774991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.051295  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:49.119774  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:49.119939  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 08:59:49.140058  135984 node_ready.go:57] node "addons-222746" has "Ready":"False" status (will retry)
	I1018 08:59:49.274767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.551293  135984 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1018 08:59:49.551319  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:49.620155  135984 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1018 08:59:49.620175  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:49.620229  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:49.639340  135984 node_ready.go:49] node "addons-222746" is "Ready"
	I1018 08:59:49.639365  135984 node_ready.go:38] duration metric: took 41.502476687s for node "addons-222746" to be "Ready" ...
	I1018 08:59:49.639380  135984 api_server.go:52] waiting for apiserver process to appear ...
	I1018 08:59:49.639430  135984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 08:59:49.652438  135984 api_server.go:72] duration metric: took 42.098773159s to wait for apiserver process to appear ...
	I1018 08:59:49.652466  135984 api_server.go:88] waiting for apiserver healthz status ...
	I1018 08:59:49.652484  135984 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1018 08:59:49.656432  135984 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
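Once the node reports Ready, the wait moves to the control plane: it first confirms a kube-apiserver process exists (the pgrep call above, where -x requires an exact match, -n picks the newest matching process, and -f matches against the full command line), then polls the apiserver's /healthz endpoint until it returns HTTP 200 with body "ok". The same probe can be reproduced from any machine holding a kubeconfig for this cluster (a sketch; kubectl get --raw issues an authenticated GET against an arbitrary API server path):

	kubectl get --raw /healthz
	# ok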
	I1018 08:59:49.657391  135984 api_server.go:141] control plane version: v1.34.1
	I1018 08:59:49.657414  135984 api_server.go:131] duration metric: took 4.941534ms to wait for apiserver health ...
	I1018 08:59:49.657423  135984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 08:59:49.661039  135984 system_pods.go:59] 20 kube-system pods found
	I1018 08:59:49.661069  135984 system_pods.go:61] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.661078  135984 system_pods.go:61] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.661085  135984 system_pods.go:61] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.661090  135984 system_pods.go:61] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.661097  135984 system_pods.go:61] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.661104  135984 system_pods.go:61] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.661108  135984 system_pods.go:61] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.661112  135984 system_pods.go:61] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.661116  135984 system_pods.go:61] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.661123  135984 system_pods.go:61] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.661127  135984 system_pods.go:61] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.661130  135984 system_pods.go:61] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.661135  135984 system_pods.go:61] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.661143  135984 system_pods.go:61] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.661148  135984 system_pods.go:61] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.661153  135984 system_pods.go:61] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.661157  135984 system_pods.go:61] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.661167  135984 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.661177  135984 system_pods.go:61] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.661184  135984 system_pods.go:61] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.661192  135984 system_pods.go:74] duration metric: took 3.763864ms to wait for pod list to return data ...
	I1018 08:59:49.661200  135984 default_sa.go:34] waiting for default service account to be created ...
	I1018 08:59:49.663219  135984 default_sa.go:45] found service account: "default"
	I1018 08:59:49.663236  135984 default_sa.go:55] duration metric: took 2.031591ms for default service account to be created ...
	I1018 08:59:49.663244  135984 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 08:59:49.666189  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:49.666215  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.666223  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.666229  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.666234  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.666239  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.666243  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.666247  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.666253  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.666256  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.666262  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.666265  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.666269  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.666277  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.666283  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.666292  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.666297  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.666306  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.666311  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.666317  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.666322  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.666336  135984 retry.go:31] will retry after 299.950718ms: missing components: kube-dns
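The "missing components: kube-dns" retry above shows that the k8s-apps check treats cluster DNS as mandatory: it keeps polling until the coredns pod (which carries the legacy k8s-app=kube-dns label) leaves Pending, backing off a few hundred milliseconds between attempts. The equivalent manual check (a sketch; the sample output below is illustrative, not taken from this run):

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	# NAME                       READY   STATUS    RESTARTS   AGE
	# coredns-66bc5c9577-x2kv4   1/1     Running   0          45s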
	I1018 08:59:49.776568  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:49.970569  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:49.970601  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:49.970608  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 08:59:49.970620  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:49.970626  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:49.970631  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:49.970635  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:49.970639  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:49.970642  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:49.970646  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:49.970652  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:49.970656  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:49.970660  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:49.970666  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:49.970675  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:49.970680  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:49.970685  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:49.970691  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:49.970696  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.970704  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:49.970708  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 08:59:49.970724  135984 retry.go:31] will retry after 357.656123ms: missing components: kube-dns
	I1018 08:59:50.051762  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:50.120267  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:50.120343  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:50.274934  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:50.332797  135984 system_pods.go:86] 20 kube-system pods found
	I1018 08:59:50.332841  135984 system_pods.go:89] "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1018 08:59:50.332851  135984 system_pods.go:89] "coredns-66bc5c9577-x2kv4" [5b9a4bb9-972b-4617-8a14-16ede824ef25] Running
	I1018 08:59:50.332863  135984 system_pods.go:89] "csi-hostpath-attacher-0" [4809ca99-ca91-4c73-95df-add141ce5e05] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1018 08:59:50.332873  135984 system_pods.go:89] "csi-hostpath-resizer-0" [eae06236-c3ea-434c-8db2-116c1d0e33e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1018 08:59:50.332879  135984 system_pods.go:89] "csi-hostpathplugin-qqwps" [9dca2d2e-2ba5-4f16-9edd-dd822f74bc8a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1018 08:59:50.332883  135984 system_pods.go:89] "etcd-addons-222746" [d7a03d58-c2bc-4461-abc5-1fbc5b24a5e8] Running
	I1018 08:59:50.332887  135984 system_pods.go:89] "kindnet-lxcvf" [2b46724b-6509-48f7-b259-a92095b8770c] Running
	I1018 08:59:50.332890  135984 system_pods.go:89] "kube-apiserver-addons-222746" [fd8775d1-53b8-4c5d-8208-9e68f50abd7a] Running
	I1018 08:59:50.332897  135984 system_pods.go:89] "kube-controller-manager-addons-222746" [88a8b9a0-7528-497e-bf04-7eafcfa42bd7] Running
	I1018 08:59:50.332904  135984 system_pods.go:89] "kube-ingress-dns-minikube" [65ce4998-62ad-4210-b58f-fb087d2c1c73] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1018 08:59:50.332911  135984 system_pods.go:89] "kube-proxy-pcfd2" [93edc2e1-f72d-4a33-bb10-8411bcd53919] Running
	I1018 08:59:50.332915  135984 system_pods.go:89] "kube-scheduler-addons-222746" [d75e1da5-9232-4c2b-88ca-6ce5f3617049] Running
	I1018 08:59:50.332919  135984 system_pods.go:89] "metrics-server-85b7d694d7-54dxd" [e6866e3c-67ee-41f0-998a-96a4683c915e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1018 08:59:50.332927  135984 system_pods.go:89] "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1018 08:59:50.332932  135984 system_pods.go:89] "registry-6b586f9694-72mcl" [096e2907-bbfb-40af-a6e6-f38e2622bacc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1018 08:59:50.332948  135984 system_pods.go:89] "registry-creds-764b6fb674-pmfcj" [0e6a1e2c-6788-4106-9ccc-28d1199b2ffe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1018 08:59:50.332962  135984 system_pods.go:89] "registry-proxy-cmg9n" [7c8194e0-87ca-4b4e-829a-1096598061ff] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1018 08:59:50.332975  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fg66r" [ed678d90-2372-472e-80ff-a9164446ebe0] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:50.332987  135984 system_pods.go:89] "snapshot-controller-7d9fbc56b8-mnxz4" [b866bd05-e3a1-443c-9e26-ff9afb7fd032] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1018 08:59:50.332995  135984 system_pods.go:89] "storage-provisioner" [faa9b206-8596-40b1-b74a-19d06417800c] Running
	I1018 08:59:50.333005  135984 system_pods.go:126] duration metric: took 669.756244ms to wait for k8s-apps to be running ...
	I1018 08:59:50.333015  135984 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 08:59:50.333066  135984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 08:59:50.345503  135984 system_svc.go:56] duration metric: took 12.476587ms WaitForService to wait for kubelet
	I1018 08:59:50.345532  135984 kubeadm.go:586] duration metric: took 42.791874637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 08:59:50.345553  135984 node_conditions.go:102] verifying NodePressure condition ...
	I1018 08:59:50.347974  135984 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 08:59:50.348000  135984 node_conditions.go:123] node cpu capacity is 8
	I1018 08:59:50.348020  135984 node_conditions.go:105] duration metric: took 2.462246ms to run NodePressure ...
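The NodePressure verification above appears to read the node's status conditions, checking that the pressure conditions are False alongside Ready=True, and records the reported ephemeral-storage and CPU capacity for context. The same conditions can be listed directly (a sketch, assuming kubectl access to this profile's cluster; the printed values mirror what a healthy node reports):

	kubectl get node addons-222746 -o \
	  jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	# MemoryPressure=False
	# DiskPressure=False
	# PIDPressure=False
	# Ready=True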
	I1018 08:59:50.348034  135984 start.go:241] waiting for startup goroutines ...
	I1018 08:59:50.552055  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:50.620484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:50.620515  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:50.774968  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:51.052364  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:51.121086  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:51.121127  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:51.275121  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:51.551578  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:51.620167  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:51.620204  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:51.775028  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:52.052370  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:52.119961  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:52.120021  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:52.274788  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:52.552169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:52.619868  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:52.620939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:52.774511  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:53.051562  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:53.119878  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:53.119928  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:53.274303  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:53.550885  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:53.620448  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:53.620458  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:53.774901  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:54.052165  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:54.120121  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:54.120191  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:54.275274  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:54.553635  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:54.620125  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:54.620153  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:54.774995  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:55.052266  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:55.153335  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:55.153381  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:55.275152  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:55.551274  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:55.619845  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:55.619894  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:55.774291  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:56.052309  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:56.122164  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:56.122562  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:56.276288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:56.552236  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:56.621554  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:56.623038  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:56.774920  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:57.052488  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:57.120432  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:57.120437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:57.275651  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:57.551815  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:57.620638  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:57.620984  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:57.775049  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:58.114288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:58.119405  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:58.119421  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:58.274926  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:58.552149  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:58.620982  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:58.621015  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:58.774445  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:59.051570  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:59.120474  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:59.120816  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:59.275291  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 08:59:59.551669  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 08:59:59.620649  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 08:59:59.620650  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 08:59:59.774747  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:00.164924  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:00.164984  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:00.165068  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:00.312456  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:00.632158  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:00.632161  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:00.632242  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:00.876320  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.124922  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:01.124994  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:01.125305  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:01.275052  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.552151  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:01.620580  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:01.620632  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:01.775220  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:01.932451  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:00:02.051509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:02.119994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:02.120082  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:02.275059  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1018 09:00:02.489032  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:00:02.489069  135984 retry.go:31] will retry after 30.499499181s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
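
Both failed applies above trip kubectl's client-side validation, which requires every manifest document to declare apiVersion and kind; one document in ig-crd.yaml evidently leaves them unset (the other resources apply cleanly, which is why the stdout shows them "unchanged"). A minimal Go sketch of that check, assuming a single-document manifest; the file layout and helper are illustrative, not part of minikube:

// checkmanifest.go - sketch of the validation kubectl complains about above:
// a manifest document must carry non-empty apiVersion and kind fields.
package main

import (
	"fmt"
	"os"

	"sigs.k8s.io/yaml" // YAML-to-JSON shim used widely in the Kubernetes ecosystem
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: checkmanifest <file.yaml>")
		os.Exit(2)
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Only the two fields kubectl's error message names are inspected here.
	var doc struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}
	if err := yaml.Unmarshal(data, &doc); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if doc.APIVersion == "" || doc.Kind == "" {
		fmt.Println("invalid: apiVersion or kind not set") // mirrors the kubectl error in the log
		os.Exit(1)
	}
	fmt.Println("ok:", doc.APIVersion, doc.Kind)
}
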
	I1018 09:00:02.551782  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:02.620209  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:02.620350  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:02.775026  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:03.052196  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:03.119866  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:03.120006  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:03.274484  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:03.551403  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:03.619893  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:03.619926  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:03.774492  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:04.051545  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:04.120270  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:04.120334  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:04.275070  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:04.551991  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:04.620235  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:04.620436  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:04.774887  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:05.051919  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:05.120315  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:05.120473  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:05.275276  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:05.551608  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:05.620145  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:05.620186  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:05.774634  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:06.051480  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:06.120325  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:06.120360  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:06.275491  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:06.551818  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:06.620777  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:06.620845  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:06.775020  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:07.052177  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:07.119970  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:07.120049  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:07.274673  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:07.552057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:07.620860  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:07.620962  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:07.774619  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:08.051664  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:08.120525  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:08.120758  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:08.274761  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:08.551939  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:08.620694  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:08.620891  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:08.774587  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:09.052273  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:09.120724  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:09.120720  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:09.275910  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:09.552288  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:09.619960  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:09.620001  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:09.774487  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:10.051584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:10.120892  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:10.121010  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:10.275097  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:10.551431  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:10.619811  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:10.619876  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:10.774584  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:11.051994  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:11.120694  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:11.120754  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:11.274437  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:11.551372  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:11.619921  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:11.619958  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:11.774972  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:12.051736  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:12.152028  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:12.152052  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:12.274851  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:12.552730  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:12.620305  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:12.620383  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:12.775046  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:13.052707  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:13.133520  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:13.133863  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:13.275613  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:13.551804  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:13.620914  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:13.620948  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:13.774864  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:14.052014  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:14.120573  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:14.120616  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:14.275258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:14.551490  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:14.620485  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:14.620498  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:14.774946  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:15.051792  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:15.120476  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:15.120623  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:15.275202  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:15.551511  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:15.620192  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:15.620307  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:15.774882  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:16.051938  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:16.120569  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:16.120687  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:16.275298  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:16.551365  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:16.619965  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:16.620120  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:16.774731  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:17.051599  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:17.120384  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:17.120545  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:17.274853  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:17.605421  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:17.619714  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:17.619951  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:17.774438  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:18.051783  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:18.151845  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:18.151929  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:18.274405  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:18.551516  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:18.620283  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:18.620321  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:18.775072  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:19.052532  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:19.120303  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:19.120476  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:19.274960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:19.552494  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:19.619960  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:19.620138  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:19.774509  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:20.051384  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:20.120109  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:20.120234  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:20.275282  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:20.552392  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:20.620348  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:20.620419  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:20.775095  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:21.052340  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:21.119753  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:21.119960  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:21.274608  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:21.551570  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:21.620110  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:21.620164  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:21.774803  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:22.051415  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:22.152473  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:22.152524  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:22.275226  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:22.552269  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:22.619851  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:22.619907  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:22.774571  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:23.052321  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:23.122451  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:23.122854  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:23.275207  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:23.552593  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:23.621557  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:23.621854  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:23.776066  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:24.052065  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:24.121338  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:24.121395  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:24.276242  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:24.551685  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:24.620915  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:24.620969  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:24.775622  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:25.052405  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:25.120280  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:25.120340  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:25.275225  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:25.551173  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:25.621094  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:25.621240  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:25.775134  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:26.052427  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:26.120205  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:26.120238  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:26.275118  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:26.552455  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:26.620389  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:26.620389  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:26.775344  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:27.051945  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:27.120767  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:27.120903  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:27.274771  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:27.552165  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:27.620926  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:27.621028  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:27.775094  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:28.052515  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:28.152792  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:28.152900  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:28.274649  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:28.551954  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:28.620716  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:28.620878  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:28.774152  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:29.052575  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:29.120531  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:29.120690  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:29.274560  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:29.552659  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:29.620399  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:29.620426  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:29.774934  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:30.052057  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:30.120861  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:30.121033  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:30.275046  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:30.553098  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:30.620789  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:30.620983  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:30.774777  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:31.052222  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:31.153245  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:31.153272  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:31.275155  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:31.552460  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:31.620765  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:31.620848  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:31.774486  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.051148  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:32.151642  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:32.151675  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:32.274956  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.552296  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:32.620029  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:32.620125  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:32.774949  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:32.989072  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1018 09:00:33.053011  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:33.120759  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:33.121131  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:33.277846  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:33.551763  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:33.620232  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:33.620394  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1018 09:00:33.668227  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1018 09:00:33.668257  135984 retry.go:31] will retry after 34.029741282s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
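
The retry.go:31 lines show minikube's recovery path: the failed apply is re-run after a randomized, growing delay (30.5s above, then 34.0s here). A minimal sketch of that retry-with-jitter pattern, as an illustrative stand-in rather than minikube's actual retry.go:

// retrysketch.go - run a step, and on failure sleep a jittered, growing
// delay before trying again, as the "will retry after ..." lines suggest.
package retrysketch

import (
	"fmt"
	"math/rand"
	"time"
)

// Retry runs fn up to attempts times, sleeping a randomized, growing delay
// between failures; it returns nil on the first success, else the last error.
func Retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Delay grows with the attempt number and carries +/-50% jitter,
		// which matches the uneven intervals seen in the log.
		d := time.Duration(float64(base) * float64(i+1) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}
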
	I1018 09:00:33.775258  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:34.051493  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:34.120113  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:34.120139  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1018 09:00:34.274928  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:34.552188  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:34.621038  135984 kapi.go:107] duration metric: took 1m25.504049422s to wait for kubernetes.io/minikube-addons=registry ...
	I1018 09:00:34.621273  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:34.775374  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:35.053088  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:35.120572  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:35.275169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:35.552079  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:35.620794  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:35.774466  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:36.051012  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:36.120450  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:36.309032  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:36.552685  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:36.620396  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:36.775136  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:37.053032  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:37.120355  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:37.275677  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:37.551907  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:37.621475  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:37.774888  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:38.052350  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:38.120265  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:38.275136  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:38.553076  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:38.620566  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:38.775252  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:39.051515  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:39.120347  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:39.275233  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:39.551802  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:39.620378  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:39.774850  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:40.052017  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:40.120368  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:40.274951  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:40.552478  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:40.621310  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:40.775363  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:41.051140  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:41.120629  135984 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1018 09:00:41.274010  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:41.552263  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:41.619906  135984 kapi.go:107] duration metric: took 1m32.502923373s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1018 09:00:41.774453  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:42.064169  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:42.274674  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:42.552176  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:42.774971  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:43.052993  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:43.274930  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:43.552107  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:43.774843  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:44.052672  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:44.275026  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:44.552360  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:44.775122  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:45.052491  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:45.275227  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:45.551772  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:45.774444  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:46.053008  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:46.274735  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:46.552635  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:46.774471  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:47.051350  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1018 09:00:47.275036  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:47.552725  135984 kapi.go:107] duration metric: took 1m38.004427241s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1018 09:00:47.774942  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:48.274806  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:48.774882  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:49.275089  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:49.774901  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:50.275049  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:50.774663  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:51.274778  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:51.774486  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:52.274610  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:52.775199  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:53.275516  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:53.775138  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:54.274344  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:54.774739  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:55.275348  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:55.774652  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:56.274433  135984 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1018 09:00:56.774280  135984 kapi.go:107] duration metric: took 1m40.502752737s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1018 09:00:56.775945  135984 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-222746 cluster.
	I1018 09:00:56.777117  135984 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1018 09:00:56.778026  135984 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1018 09:01:07.698205  135984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1018 09:01:08.240178  135984 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1018 09:01:08.240315  135984 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1018 09:01:08.242730  135984 out.go:179] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, nvidia-device-plugin, registry-creds, storage-provisioner-rancher, cloud-spanner, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1018 09:01:08.243911  135984 addons.go:514] duration metric: took 2m0.690316433s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns nvidia-device-plugin registry-creds storage-provisioner-rancher cloud-spanner metrics-server yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1018 09:01:08.243967  135984 start.go:246] waiting for cluster config update ...
	I1018 09:01:08.243994  135984 start.go:255] writing updated cluster config ...
	I1018 09:01:08.244249  135984 ssh_runner.go:195] Run: rm -f paused
	I1018 09:01:08.248064  135984 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:01:08.251617  135984 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x2kv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.255374  135984 pod_ready.go:94] pod "coredns-66bc5c9577-x2kv4" is "Ready"
	I1018 09:01:08.255400  135984 pod_ready.go:86] duration metric: took 3.759711ms for pod "coredns-66bc5c9577-x2kv4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.257353  135984 pod_ready.go:83] waiting for pod "etcd-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.260769  135984 pod_ready.go:94] pod "etcd-addons-222746" is "Ready"
	I1018 09:01:08.260790  135984 pod_ready.go:86] duration metric: took 3.418985ms for pod "etcd-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.262759  135984 pod_ready.go:83] waiting for pod "kube-apiserver-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.266034  135984 pod_ready.go:94] pod "kube-apiserver-addons-222746" is "Ready"
	I1018 09:01:08.266054  135984 pod_ready.go:86] duration metric: took 3.275246ms for pod "kube-apiserver-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.267618  135984 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.652025  135984 pod_ready.go:94] pod "kube-controller-manager-addons-222746" is "Ready"
	I1018 09:01:08.652059  135984 pod_ready.go:86] duration metric: took 384.421132ms for pod "kube-controller-manager-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:08.852923  135984 pod_ready.go:83] waiting for pod "kube-proxy-pcfd2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.251435  135984 pod_ready.go:94] pod "kube-proxy-pcfd2" is "Ready"
	I1018 09:01:09.251468  135984 pod_ready.go:86] duration metric: took 398.496243ms for pod "kube-proxy-pcfd2" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.452332  135984 pod_ready.go:83] waiting for pod "kube-scheduler-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.851747  135984 pod_ready.go:94] pod "kube-scheduler-addons-222746" is "Ready"
	I1018 09:01:09.851777  135984 pod_ready.go:86] duration metric: took 399.41554ms for pod "kube-scheduler-addons-222746" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:01:09.851793  135984 pod_ready.go:40] duration metric: took 1.603694979s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:01:09.895295  135984 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:01:09.896972  135984 out.go:179] * Done! kubectl is now configured to use "addons-222746" cluster and "default" namespace by default
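
Note: the kapi.go:96 / kapi.go:107 lines above are minikube's addon wait loop: it polls pods matching a label selector (for example kubernetes.io/minikube-addons=gcp-auth) until every matching pod reports Ready, then logs the total wait as a duration metric. A minimal client-go sketch of the same pattern, assuming an already-configured clientset; this is illustrative, not minikube's actual kapi.go:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether a pod's Ready condition is True.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // waitForPodsReady polls pods matching selector in ns until all are Ready
    // or the timeout expires, mirroring the "waiting for pod" lines above.
    func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        start := time.Now()
        for time.Since(start) < timeout {
            pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                ready := true
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        ready = false
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
                    }
                }
                if ready {
                    fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", selector)
    }

    func main() {
        _ = waitForPodsReady // wiring a real clientset (e.g. via clientcmd) is omitted
    }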
	
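
Note: per the gcp-auth hints above, credentials are mounted into every new pod unless the pod carries a label with the gcp-auth-skip-secret key. A hedged Go sketch of a pod object carrying that opt-out label; only the label key comes from the log, while the name, namespace, image, and label value are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "no-gcp-creds",
                Namespace: "default",
                // The webhook skips pods labelled with this key; the value is arbitrary.
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "busybox:stable"}},
            },
        }
        fmt.Println(pod.Labels)
    }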
	
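
Note: the inspektor-gadget failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the manifest's top-level apiVersion and kind fields are missing; every object kubectl applies must set both (or validation must be disabled with --validate=false, as the error suggests). A rough sketch of that check, assuming the sigs.k8s.io/yaml package; illustrative only, not kubectl's actual implementation:

    package main

    import (
        "fmt"
        "strings"

        "sigs.k8s.io/yaml"
    )

    // typeMeta mirrors the two top-level fields the error message names.
    type typeMeta struct {
        APIVersion string `json:"apiVersion"`
        Kind       string `json:"kind"`
    }

    // validateDoc rejects a YAML document that lacks apiVersion or kind.
    func validateDoc(doc []byte) error {
        var tm typeMeta
        if err := yaml.Unmarshal(doc, &tm); err != nil {
            return err
        }
        var missing []string
        if tm.APIVersion == "" {
            missing = append(missing, "apiVersion not set")
        }
        if tm.Kind == "" {
            missing = append(missing, "kind not set")
        }
        if len(missing) > 0 {
            return fmt.Errorf("error validating data: [%s]", strings.Join(missing, ", "))
        }
        return nil
    }

    func main() {
        // A document without apiVersion/kind reproduces the complaint in the log.
        fmt.Println(validateDoc([]byte("metadata:\n  name: x\n")))
    }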
	==> CRI-O <==
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.030464035Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.036232594Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.036918617Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.074299512Z" level=info msg="Created container 85c255ae59d1e5328acebabefd501b0acc2e7ca7f163a5ce64aa00d8a0c6df47: local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4/helper-pod" id=52f596cb-20a7-4e36-ba8b-2f7a41753bce name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.074927783Z" level=info msg="Starting container: 85c255ae59d1e5328acebabefd501b0acc2e7ca7f163a5ce64aa00d8a0c6df47" id=853087e7-c702-4997-8835-0cc36ad4d9b4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:01:21 addons-222746 crio[772]: time="2025-10-18T09:01:21.077079972Z" level=info msg="Started container" PID=6683 containerID=85c255ae59d1e5328acebabefd501b0acc2e7ca7f163a5ce64aa00d8a0c6df47 description=local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4/helper-pod id=853087e7-c702-4997-8835-0cc36ad4d9b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c7eca4c10e91a36b786b02c0f61753a190ed8ac536b478ae5626ddda94d3a78
	Oct 18 09:01:22 addons-222746 crio[772]: time="2025-10-18T09:01:22.419035698Z" level=info msg="Stopping pod sandbox: 3c7eca4c10e91a36b786b02c0f61753a190ed8ac536b478ae5626ddda94d3a78" id=327f6dbb-248c-4e9a-8bbb-6c9ff9b2ba83 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:01:22 addons-222746 crio[772]: time="2025-10-18T09:01:22.419285994Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4 Namespace:local-path-storage ID:3c7eca4c10e91a36b786b02c0f61753a190ed8ac536b478ae5626ddda94d3a78 UID:13b752bf-d294-411c-bd81-c7ed27eaee4a NetNS:/var/run/netns/6f051c0c-3f9f-46e0-b742-5445f2c78b07 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000dc0010}] Aliases:map[]}"
	Oct 18 09:01:22 addons-222746 crio[772]: time="2025-10-18T09:01:22.419395355Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4 from CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:01:22 addons-222746 crio[772]: time="2025-10-18T09:01:22.434338708Z" level=info msg="Stopped pod sandbox: 3c7eca4c10e91a36b786b02c0f61753a190ed8ac536b478ae5626ddda94d3a78" id=327f6dbb-248c-4e9a-8bbb-6c9ff9b2ba83 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.488612818Z" level=info msg="Running pod sandbox: default/test-local-path/POD" id=35a5d422-d0f4-4493-a155-8821464a411c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.488724936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.496372283Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:cf3210d009d0fefed0ded54a643396964bfa49955067e8d5b106ac536cf1b679 UID:a9734dff-ca59-4514-a7b7-ba73703cd2e2 NetNS:/var/run/netns/badb4644-6052-45ef-848e-29016cb069a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053aa28}] Aliases:map[]}"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.496408981Z" level=info msg="Adding pod default_test-local-path to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.509005201Z" level=info msg="Got pod network &{Name:test-local-path Namespace:default ID:cf3210d009d0fefed0ded54a643396964bfa49955067e8d5b106ac536cf1b679 UID:a9734dff-ca59-4514-a7b7-ba73703cd2e2 NetNS:/var/run/netns/badb4644-6052-45ef-848e-29016cb069a6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00053aa28}] Aliases:map[]}"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.509187091Z" level=info msg="Checking pod default_test-local-path for CNI network kindnet (type=ptp)"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.510197114Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.511154225Z" level=info msg="Ran pod sandbox cf3210d009d0fefed0ded54a643396964bfa49955067e8d5b106ac536cf1b679 with infra container: default/test-local-path/POD" id=35a5d422-d0f4-4493-a155-8821464a411c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.512516749Z" level=info msg="Checking image status: busybox:stable" id=15a94e69-c200-446b-83ad-9a371af2e5fc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.513160328Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.513235686Z" level=info msg="Image busybox:stable not found" id=15a94e69-c200-446b-83ad-9a371af2e5fc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.513330569Z" level=info msg="Neither image nor artifact busybox:stable found" id=15a94e69-c200-446b-83ad-9a371af2e5fc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.514128116Z" level=info msg="Pulling image: busybox:stable" id=aefd14f7-a65f-47e9-bef9-a47a1356cccf name=/runtime.v1.ImageService/PullImage
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.514269582Z" level=info msg="Resolved \"busybox\" as an alias (/etc/containers/registries.conf.d/shortnames.conf)"
	Oct 18 09:01:24 addons-222746 crio[772]: time="2025-10-18T09:01:24.515818696Z" level=info msg="Trying to access \"docker.io/library/busybox:stable\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                                          NAMESPACE
	85c255ae59d1e       docker.io/library/busybox@sha256:023917ec6a886d0e8e15f28fb543515a5fcd8d938edb091e8147db4efed388ee                                            4 seconds ago        Exited              helper-pod                               0                   3c7eca4c10e91       helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4   local-path-storage
	c9cf2f6523ffc       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          12 seconds ago       Running             busybox                                  0                   cd66c285fc052       busybox                                                      default
	713078a1d87ab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 29 seconds ago       Running             gcp-auth                                 0                   70cd7615709b8       gcp-auth-78565c9fb4-9z7q6                                    gcp-auth
	3f1e0ab974c3a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          38 seconds ago       Running             csi-snapshotter                          0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	79c91ae766bdc       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          40 seconds ago       Running             csi-provisioner                          0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	2006d0829aa98       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            42 seconds ago       Running             liveness-probe                           0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	da6e806e056d4       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           43 seconds ago       Running             hostpath                                 0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	a4fd53616bc11       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                44 seconds ago       Running             node-driver-registrar                    0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	b9be2f644afa5       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             45 seconds ago       Running             controller                               0                   df3f2454b3603       ingress-nginx-controller-675c5ddd98-hvm5h                    ingress-nginx
	52255095f8932       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            48 seconds ago       Running             gadget                                   0                   a0caedd357fb0       gadget-7pfdj                                                 gadget
	2ede7d02d14dc       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             51 seconds ago       Exited              patch                                    2                   291ea63ed51a9       gcp-auth-certs-patch-ht4hn                                   gcp-auth
	edce1d10c783f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              51 seconds ago       Running             registry-proxy                           0                   a9f67c9dd7ec8       registry-proxy-cmg9n                                         kube-system
	cc9e7bafa8a6c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      54 seconds ago       Running             volume-snapshot-controller               0                   3794d4f3b0255       snapshot-controller-7d9fbc56b8-fg66r                         kube-system
	59e73a4fa0155       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   54 seconds ago       Exited              create                                   0                   2eba83e91886a       gcp-auth-certs-create-hnm75                                  gcp-auth
	0f74c115de3ce       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   54 seconds ago       Running             csi-external-health-monitor-controller   0                   df7180f699ba2       csi-hostpathplugin-qqwps                                     kube-system
	1267812961fa3       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     55 seconds ago       Running             nvidia-device-plugin-ctr                 0                   2d9b6ece5646b       nvidia-device-plugin-daemonset-bmgjg                         kube-system
	1c12fcfd58686       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   109b10224296e       amd-gpu-device-plugin-mcrsn                                  kube-system
	13543c0f3dca2       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   5daf5bd9730b1       csi-hostpath-resizer-0                                       kube-system
	fe7a994e6964d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   b98a446e595c7       ingress-nginx-admission-patch-5jjnn                          ingress-nginx
	4bf2327b6d921       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   9a163fb9ac1b8       csi-hostpath-attacher-0                                      kube-system
	3c83994993aa0       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   1d6f37c793c35       snapshot-controller-7d9fbc56b8-mnxz4                         kube-system
	70de5b6959505       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   03480266670ee       ingress-nginx-admission-create-2kfb4                         ingress-nginx
	430460fa55c77       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   d044509ef50ec       metrics-server-85b7d694d7-54dxd                              kube-system
	dbca38ce17214       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   ddf65e157517d       cloud-spanner-emulator-86bd5cbb97-s6s56                      default
	e14e163f8bfc4       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   c46bb569eb9e9       local-path-provisioner-648f6765c9-k7dw9                      local-path-storage
	910f4bbb59848       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   0df4aaba65fe4       kube-ingress-dns-minikube                                    kube-system
	ebae0d10fe53d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   b03e7ef28e583       registry-6b586f9694-72mcl                                    kube-system
	b0ebf5a6f8628       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   071450d6c9dee       yakd-dashboard-5ff678cb9-2vdz9                               yakd-dashboard
	703f4d898ac52       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   375dbf8dacc1f       coredns-66bc5c9577-x2kv4                                     kube-system
	d058db45cb842       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   27ed68dec04f3       storage-provisioner                                          kube-system
	d3608cbd20f63       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   86cec15cc10d8       kindnet-lxcvf                                                kube-system
	2026a4d802754       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   732a5f671d774       kube-proxy-pcfd2                                             kube-system
	976f8ced94e7b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   ef1a81f013fce       kube-controller-manager-addons-222746                        kube-system
	c2f5337233ca0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   1ea5156282dc0       kube-apiserver-addons-222746                                 kube-system
	fce4b4ac493ec       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   4af2070922844       kube-scheduler-addons-222746                                 kube-system
	179aeead4dbf5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   aadc14daf1b57       etcd-addons-222746                                           kube-system
	
	
	==> coredns [703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e] <==
	[INFO] 10.244.0.17:39762 - 63781 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003074079s
	[INFO] 10.244.0.17:34824 - 30179 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000076374s
	[INFO] 10.244.0.17:34824 - 29867 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000109864s
	[INFO] 10.244.0.17:43174 - 10110 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000081442s
	[INFO] 10.244.0.17:43174 - 9826 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000138753s
	[INFO] 10.244.0.17:56304 - 37343 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00007235s
	[INFO] 10.244.0.17:56304 - 37102 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.00010707s
	[INFO] 10.244.0.17:37240 - 57338 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000125775s
	[INFO] 10.244.0.17:37240 - 56842 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000167565s
	[INFO] 10.244.0.22:45746 - 25868 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175257s
	[INFO] 10.244.0.22:57686 - 27459 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000247817s
	[INFO] 10.244.0.22:60907 - 8988 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011483s
	[INFO] 10.244.0.22:38646 - 62741 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000146743s
	[INFO] 10.244.0.22:39676 - 64670 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133382s
	[INFO] 10.244.0.22:54732 - 56052 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000161153s
	[INFO] 10.244.0.22:52478 - 32684 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.00526732s
	[INFO] 10.244.0.22:57526 - 15357 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.007030857s
	[INFO] 10.244.0.22:41618 - 39020 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005737374s
	[INFO] 10.244.0.22:51352 - 11167 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.006772922s
	[INFO] 10.244.0.22:58709 - 60983 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005840989s
	[INFO] 10.244.0.22:58707 - 21956 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006377774s
	[INFO] 10.244.0.22:50057 - 34233 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00433407s
	[INFO] 10.244.0.22:52291 - 57953 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006008205s
	[INFO] 10.244.0.22:42173 - 26500 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001867948s
	[INFO] 10.244.0.22:39812 - 53887 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.002021191s
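
Note: the NXDOMAIN ladder above is ordinary resolv.conf search-path expansion: with Kubernetes' default ndots:5, an external name such as storage.googleapis.com is tried with each search suffix in turn before the bare name finally resolves with NOERROR. A small Go sketch of the candidate order, with the suffix list read directly off the queries from 10.244.0.22 above:

    package main

    import "fmt"

    func main() {
        // Search suffixes in the order the coredns log shows them being tried.
        search := []string{
            "gcp-auth.svc.cluster.local",
            "svc.cluster.local",
            "cluster.local",
            "local",
            "europe-west2-a.c.k8s-minikube.internal",
            "c.k8s-minikube.internal",
            "google.internal",
        }
        name := "storage.googleapis.com"
        for _, s := range search {
            fmt.Println(name + "." + s) // each of these returned NXDOMAIN above
        }
        fmt.Println(name) // the bare name returned NOERROR
    }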
	
	
	==> describe nodes <==
	Name:               addons-222746
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-222746
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=addons-222746
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T08_59_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-222746
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-222746"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 08:59:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-222746
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:01:06 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:01:06 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:01:06 +0000   Sat, 18 Oct 2025 08:58:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:01:06 +0000   Sat, 18 Oct 2025 08:59:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-222746
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2ff719aa-4e75-48be-b689-a480c6c5bd53
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  default                     cloud-spanner-emulator-86bd5cbb97-s6s56      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  default                     test-local-path                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-7pfdj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  gcp-auth                    gcp-auth-78565c9fb4-9z7q6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-hvm5h    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m16s
	  kube-system                 amd-gpu-device-plugin-mcrsn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-x2kv4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m17s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 csi-hostpathplugin-qqwps                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 etcd-addons-222746                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m22s
	  kube-system                 kindnet-lxcvf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-addons-222746                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-addons-222746        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-proxy-pcfd2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-addons-222746                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 metrics-server-85b7d694d7-54dxd              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m17s
	  kube-system                 nvidia-device-plugin-daemonset-bmgjg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 registry-6b586f9694-72mcl                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 registry-creds-764b6fb674-pmfcj              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 registry-proxy-cmg9n                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 snapshot-controller-7d9fbc56b8-fg66r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 snapshot-controller-7d9fbc56b8-mnxz4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  local-path-storage          local-path-provisioner-648f6765c9-k7dw9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2vdz9               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 2m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node addons-222746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node addons-222746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node addons-222746 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s                  kubelet          Node addons-222746 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s                  kubelet          Node addons-222746 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s                  kubelet          Node addons-222746 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m18s                  node-controller  Node addons-222746 event: Registered Node addons-222746 in Controller
	  Normal  NodeReady                96s                    kubelet          Node addons-222746 status is now: NodeReady
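
Note: the Allocated resources totals are just column sums of the pod table above: CPU requests 100m (ingress-nginx controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, which is 1050m / 8000m ≈ 13% of the node's 8 CPUs. Likewise memory requests 90Mi + 70Mi + 100Mi + 50Mi + 200Mi + 128Mi = 638Mi, and memory limits 170Mi + 50Mi + 256Mi = 476Mi.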
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 0a 94 4d a0 94 08 06
	[Oct18 08:47] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de f1 9f ef 45 d3 08 06
	[  +1.236229] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 0b 85 04 a3 f5 08 06
	[  +0.033854] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de bb 0a 74 cb d6 08 06
	[  +6.253384] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 3c 8d be 8e e6 08 06
	[ +33.235683] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 e6 ac 3b 48 69 08 06
	[  +1.042880] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 8d 77 32 98 b4 08 06
	[  +0.041586] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b2 be 52 7d 83 17 08 06
	[  +6.441556] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000023] ll header: 00000000: ff ff ff ff ff ff ba ff c0 14 64 ef 08 06
	[Oct18 08:48] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff c6 65 d6 52 68 2d 08 06
	[  +0.926785] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 b0 e9 40 f3 44 08 06
	[  +0.035109] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	
	
	==> etcd [179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4] <==
	{"level":"warn","ts":"2025-10-18T08:59:00.065665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.072589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.078643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.085438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.091250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.101717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.108527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.115761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:00.161998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:09.936246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:09.943314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.043818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.050153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.067025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T08:59:37.073412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:00:00.162737Z","caller":"traceutil/trace.go:172","msg":"trace[1548306592] linearizableReadLoop","detail":"{readStateIndex:1027; appliedIndex:1027; }","duration":"112.250124ms","start":"2025-10-18T09:00:00.050464Z","end":"2025-10-18T09:00:00.162714Z","steps":["trace[1548306592] 'read index received'  (duration: 112.240779ms)","trace[1548306592] 'applied index is now lower than readState.Index'  (duration: 7.785µs)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:00:00.162932Z","caller":"traceutil/trace.go:172","msg":"trace[375874786] transaction","detail":"{read_only:false; response_revision:1005; number_of_response:1; }","duration":"223.543871ms","start":"2025-10-18T08:59:59.939364Z","end":"2025-10-18T09:00:00.162908Z","steps":["trace[375874786] 'process raft request'  (duration: 223.384271ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:00.162963Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"112.480776ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:00:00.163041Z","caller":"traceutil/trace.go:172","msg":"trace[412720658] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1005; }","duration":"112.572864ms","start":"2025-10-18T09:00:00.050454Z","end":"2025-10-18T09:00:00.163027Z","steps":["trace[412720658] 'agreement among raft nodes before linearized reading'  (duration: 112.434711ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:00:00.630570Z","caller":"traceutil/trace.go:172","msg":"trace[872626465] transaction","detail":"{read_only:false; response_revision:1006; number_of_response:1; }","duration":"126.971909ms","start":"2025-10-18T09:00:00.503578Z","end":"2025-10-18T09:00:00.630550Z","steps":["trace[872626465] 'process raft request'  (duration: 126.865631ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:00.875069Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.327333ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-18T09:00:00.875101Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.57661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:00:00.875141Z","caller":"traceutil/trace.go:172","msg":"trace[683937687] range","detail":"{range_begin:/registry/daemonsets; range_end:; response_count:0; response_revision:1006; }","duration":"119.412098ms","start":"2025-10-18T09:00:00.755713Z","end":"2025-10-18T09:00:00.875125Z","steps":["trace[683937687] 'range keys from in-memory index tree'  (duration: 119.249563ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:00:00.875153Z","caller":"traceutil/trace.go:172","msg":"trace[1235244990] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1006; }","duration":"101.633783ms","start":"2025-10-18T09:00:00.773505Z","end":"2025-10-18T09:00:00.875138Z","steps":["trace[1235244990] 'range keys from in-memory index tree'  (duration: 101.510301ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:00:17.603623Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"119.902594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128040711672565207 > lease_revoke:<id:70cc99f68b280c8c>","response":"size:29"}
	
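Note: etcd's "apply request took too long" warnings fire when a request exceeds the logged expected-duration of 100ms, and the accompanying traces break the latency down: for example, the read at 09:00:00.162 took 112.57ms in total, 112.43ms of which was spent on "agreement among raft nodes" (waiting for the read index) before the linearized read could proceed.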
	
	==> gcp-auth [713078a1d87ab4685bbcbf8d1d3f5e0074bcda5e9a5a2667fa4c20f0f81d9fc7] <==
	2025/10/18 09:00:55 GCP Auth Webhook started!
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:10 Ready to marshal response ...
	2025/10/18 09:01:10 Ready to write response ...
	2025/10/18 09:01:19 Ready to marshal response ...
	2025/10/18 09:01:19 Ready to write response ...
	2025/10/18 09:01:19 Ready to marshal response ...
	2025/10/18 09:01:19 Ready to write response ...
	
	
	==> kernel <==
	 09:01:25 up 43 min,  0 user,  load average: 1.48, 1.46, 1.31
	Linux addons-222746 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c] <==
	E1018 08:59:39.065335       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1018 08:59:39.065425       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1018 08:59:40.454471       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 08:59:40.454500       1 metrics.go:72] Registering metrics
	I1018 08:59:40.454545       1 controller.go:711] "Syncing nftables rules"
	I1018 08:59:49.072359       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:59:49.072413       1 main.go:301] handling current node
	I1018 08:59:59.066967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 08:59:59.067008       1 main.go:301] handling current node
	I1018 09:00:09.066910       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:09.066950       1 main.go:301] handling current node
	I1018 09:00:19.065384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:19.065422       1 main.go:301] handling current node
	I1018 09:00:29.071002       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:29.071031       1 main.go:301] handling current node
	I1018 09:00:39.065448       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:39.065490       1 main.go:301] handling current node
	I1018 09:00:49.066018       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:49.066050       1 main.go:301] handling current node
	I1018 09:00:59.065549       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:00:59.065586       1 main.go:301] handling current node
	I1018 09:01:09.066947       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:01:09.066973       1 main.go:301] handling current node
	I1018 09:01:19.068930       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:01:19.068982       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711] <==
	E1018 09:00:11.145803       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 09:00:11.145844       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	E1018 09:00:11.147610       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	E1018 09:00:11.153038       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.96.218.49:443: connect: connection refused" logger="UnhandledError"
	W1018 09:00:12.146174       1 handler_proxy.go:99] no RequestInfo found in the context
	W1018 09:00:12.146209       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:00:12.146214       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1018 09:00:12.146232       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1018 09:00:12.146281       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1018 09:00:12.147404       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1018 09:00:16.179263       1 handler_proxy.go:99] no RequestInfo found in the context
	E1018 09:00:16.179316       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1018 09:00:16.179329       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.96.218.49:443/apis/metrics.k8s.io/v1beta1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" logger="UnhandledError"
	I1018 09:00:16.187172       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1018 09:01:18.558652       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46284: use of closed network connection
	E1018 09:01:18.706662       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:46308: use of closed network connection
	
	
	==> kube-controller-manager [976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb] <==
	I1018 08:59:07.032719       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 08:59:07.032748       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 08:59:07.032795       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 08:59:07.032893       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 08:59:07.032976       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-222746"
	I1018 08:59:07.033477       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 08:59:07.034712       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 08:59:07.034763       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:59:07.034800       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 08:59:07.034874       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 08:59:07.034918       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 08:59:07.034931       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 08:59:07.034938       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 08:59:07.040861       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-222746" podCIDRs=["10.244.0.0/24"]
	I1018 08:59:07.052017       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 08:59:37.038788       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 08:59:37.038975       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1018 08:59:37.039027       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1018 08:59:37.057950       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1018 08:59:37.060899       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1018 08:59:37.139503       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 08:59:37.161843       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 08:59:52.038849       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1018 09:00:07.149101       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1018 09:00:07.169043       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733] <==
	I1018 08:59:08.807014       1 server_linux.go:53] "Using iptables proxy"
	I1018 08:59:08.914439       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 08:59:09.017951       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 08:59:09.018062       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 08:59:09.018161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 08:59:09.043077       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 08:59:09.043141       1 server_linux.go:132] "Using iptables Proxier"
	I1018 08:59:09.049548       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 08:59:09.050033       1 server.go:527] "Version info" version="v1.34.1"
	I1018 08:59:09.050067       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 08:59:09.051876       1 config.go:200] "Starting service config controller"
	I1018 08:59:09.051900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 08:59:09.051964       1 config.go:106] "Starting endpoint slice config controller"
	I1018 08:59:09.051986       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 08:59:09.052020       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 08:59:09.052033       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 08:59:09.052062       1 config.go:309] "Starting node config controller"
	I1018 08:59:09.052089       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 08:59:09.052115       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 08:59:09.152593       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 08:59:09.152605       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 08:59:09.152593       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778] <==
	E1018 08:59:00.616885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 08:59:00.617563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 08:59:00.617736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 08:59:00.617787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:59:00.618091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:59:00.618547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:59:00.618705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 08:59:00.618768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:59:00.618852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 08:59:00.618866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 08:59:00.618921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 08:59:00.618992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 08:59:00.618996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 08:59:00.619111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:59:00.619115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:59:00.619125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 08:59:01.438954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 08:59:01.510275       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 08:59:01.555615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 08:59:01.560589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 08:59:01.627377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 08:59:01.669363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 08:59:01.702441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 08:59:01.775681       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1018 08:59:04.413778       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:01:10 addons-222746 kubelet[1298]: I1018 09:01:10.516388    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8krqk\" (UniqueName: \"kubernetes.io/projected/5fc7c677-a2c0-4ad1-91d2-05d5bef7fde7-kube-api-access-8krqk\") pod \"busybox\" (UID: \"5fc7c677-a2c0-4ad1-91d2-05d5bef7fde7\") " pod="default/busybox"
	Oct 18 09:01:13 addons-222746 kubelet[1298]: I1018 09:01:13.390987    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.1863234280000001 podStartE2EDuration="3.390968779s" podCreationTimestamp="2025-10-18 09:01:10 +0000 UTC" firstStartedPulling="2025-10-18 09:01:10.743996684 +0000 UTC m=+127.909140982" lastFinishedPulling="2025-10-18 09:01:12.948642022 +0000 UTC m=+130.113786333" observedRunningTime="2025-10-18 09:01:13.389978656 +0000 UTC m=+130.555122976" watchObservedRunningTime="2025-10-18 09:01:13.390968779 +0000 UTC m=+130.556113098"
	Oct 18 09:01:19 addons-222746 kubelet[1298]: I1018 09:01:19.281147    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-data\") pod \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") " pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:19 addons-222746 kubelet[1298]: I1018 09:01:19.281201    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/13b752bf-d294-411c-bd81-c7ed27eaee4a-script\") pod \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") " pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:19 addons-222746 kubelet[1298]: I1018 09:01:19.281259    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-gcp-creds\") pod \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") " pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:19 addons-222746 kubelet[1298]: I1018 09:01:19.281333    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbfbt\" (UniqueName: \"kubernetes.io/projected/13b752bf-d294-411c-bd81-c7ed27eaee4a-kube-api-access-pbfbt\") pod \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") " pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505387    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-gcp-creds\") pod \"13b752bf-d294-411c-bd81-c7ed27eaee4a\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") "
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505461    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/13b752bf-d294-411c-bd81-c7ed27eaee4a-script\") pod \"13b752bf-d294-411c-bd81-c7ed27eaee4a\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") "
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505494    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbfbt\" (UniqueName: \"kubernetes.io/projected/13b752bf-d294-411c-bd81-c7ed27eaee4a-kube-api-access-pbfbt\") pod \"13b752bf-d294-411c-bd81-c7ed27eaee4a\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") "
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505486    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "13b752bf-d294-411c-bd81-c7ed27eaee4a" (UID: "13b752bf-d294-411c-bd81-c7ed27eaee4a"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505518    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-data\") pod \"13b752bf-d294-411c-bd81-c7ed27eaee4a\" (UID: \"13b752bf-d294-411c-bd81-c7ed27eaee4a\") "
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505608    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-data" (OuterVolumeSpecName: "data") pod "13b752bf-d294-411c-bd81-c7ed27eaee4a" (UID: "13b752bf-d294-411c-bd81-c7ed27eaee4a"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505793    1298 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-gcp-creds\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505816    1298 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/13b752bf-d294-411c-bd81-c7ed27eaee4a-data\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.505967    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b752bf-d294-411c-bd81-c7ed27eaee4a-script" (OuterVolumeSpecName: "script") pod "13b752bf-d294-411c-bd81-c7ed27eaee4a" (UID: "13b752bf-d294-411c-bd81-c7ed27eaee4a"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.507640    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b752bf-d294-411c-bd81-c7ed27eaee4a-kube-api-access-pbfbt" (OuterVolumeSpecName: "kube-api-access-pbfbt") pod "13b752bf-d294-411c-bd81-c7ed27eaee4a" (UID: "13b752bf-d294-411c-bd81-c7ed27eaee4a"). InnerVolumeSpecName "kube-api-access-pbfbt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.606733    1298 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/13b752bf-d294-411c-bd81-c7ed27eaee4a-script\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:01:22 addons-222746 kubelet[1298]: I1018 09:01:22.606772    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pbfbt\" (UniqueName: \"kubernetes.io/projected/13b752bf-d294-411c-bd81-c7ed27eaee4a-kube-api-access-pbfbt\") on node \"addons-222746\" DevicePath \"\""
	Oct 18 09:01:23 addons-222746 kubelet[1298]: I1018 09:01:23.424768    1298 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c7eca4c10e91a36b786b02c0f61753a190ed8ac536b478ae5626ddda94d3a78"
	Oct 18 09:01:23 addons-222746 kubelet[1298]: E1018 09:01:23.426229    1298 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" is forbidden: User \"system:node:addons-222746\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-222746' and this object" podUID="13b752bf-d294-411c-bd81-c7ed27eaee4a" pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:24 addons-222746 kubelet[1298]: E1018 09:01:24.186464    1298 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" is forbidden: User \"system:node:addons-222746\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-222746' and this object" podUID="13b752bf-d294-411c-bd81-c7ed27eaee4a" pod="local-path-storage/helper-pod-create-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4"
	Oct 18 09:01:24 addons-222746 kubelet[1298]: I1018 09:01:24.319607    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvtb\" (UniqueName: \"kubernetes.io/projected/a9734dff-ca59-4514-a7b7-ba73703cd2e2-kube-api-access-hcvtb\") pod \"test-local-path\" (UID: \"a9734dff-ca59-4514-a7b7-ba73703cd2e2\") " pod="default/test-local-path"
	Oct 18 09:01:24 addons-222746 kubelet[1298]: I1018 09:01:24.319670    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a9734dff-ca59-4514-a7b7-ba73703cd2e2-gcp-creds\") pod \"test-local-path\" (UID: \"a9734dff-ca59-4514-a7b7-ba73703cd2e2\") " pod="default/test-local-path"
	Oct 18 09:01:24 addons-222746 kubelet[1298]: I1018 09:01:24.319700    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\" (UniqueName: \"kubernetes.io/host-path/a9734dff-ca59-4514-a7b7-ba73703cd2e2-pvc-ac0e00a7-7476-4965-b255-10439b12d9d4\") pod \"test-local-path\" (UID: \"a9734dff-ca59-4514-a7b7-ba73703cd2e2\") " pod="default/test-local-path"
	Oct 18 09:01:24 addons-222746 kubelet[1298]: I1018 09:01:24.922476    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b752bf-d294-411c-bd81-c7ed27eaee4a" path="/var/lib/kubelet/pods/13b752bf-d294-411c-bd81-c7ed27eaee4a/volumes"
	
	
	==> storage-provisioner [d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a] <==
	W1018 09:01:00.475637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:02.478109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:02.481743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:04.484620       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:04.488951       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:06.491859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:06.496710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:08.499341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:08.503722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:10.506855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:10.511425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:12.514306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:12.519647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:14.522709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:14.526092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:16.529054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:16.532480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:18.534877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:18.538319       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:20.541672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:20.545296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:22.548155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:22.551655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:24.555411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:01:24.560847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
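Note: the storage-provisioner block above fills with the same client-go warning on a ~2s cadence because it still reads core/v1 Endpoints, which the warning marks deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. Below is a minimal client-go sketch of the replacement call; the kube-system namespace and in-cluster config are illustrative assumptions, not taken from the provisioner's source.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes the sketch runs inside a pod with a service account mounted.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// discovery.k8s.io/v1 EndpointSlice replaces the deprecated core/v1
	// Endpoints API that triggers the warnings above.
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}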
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-222746 -n addons-222746
helpers_test.go:269: (dbg) Run:  kubectl --context addons-222746 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn registry-creds-764b6fb674-pmfcj
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-222746 describe pod test-local-path ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn registry-creds-764b6fb674-pmfcj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-222746 describe pod test-local-path ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn registry-creds-764b6fb674-pmfcj: exit status 1 (68.453458ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-222746/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:01:24 +0000
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.25
	IPs:
	  IP:  10.244.0.25
	Containers:
	  busybox:
	    Container ID:  cri-o://1f3877ab559b704ab4cbec63909c436066de22b54810b80adf423346a4b627ae
	    Image:         busybox:stable
	    Image ID:      docker.io/library/busybox@sha256:1fcf5df59121b92d61e066df1788e8df0cc35623f5d62d9679a41e163b6a0cdb
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 09:01:26 +0000
	      Finished:     Sat, 18 Oct 2025 09:01:26 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hcvtb (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-hcvtb:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/test-local-path to addons-222746
	  Normal  Pulling    2s    kubelet            Pulling image "busybox:stable"
	  Normal  Pulled     0s    kubelet            Successfully pulled image "busybox:stable" in 1.524s (1.524s including waiting). Image size: 4670414 bytes.
	  Normal  Created    0s    kubelet            Created container: busybox
	  Normal  Started    0s    kubelet            Started container busybox

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2kfb4" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5jjnn" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-pmfcj" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-222746 describe pod test-local-path ingress-nginx-admission-create-2kfb4 ingress-nginx-admission-patch-5jjnn registry-creds-764b6fb674-pmfcj: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable headlamp --alsologtostderr -v=1: exit status 11 (224.468286ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:26.517109  145729 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:26.517404  145729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:26.517415  145729 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:26.517422  145729 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:26.517626  145729 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:26.517934  145729 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:26.518300  145729 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:26.518319  145729 addons.go:606] checking whether the cluster is paused
	I1018 09:01:26.518418  145729 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:26.518434  145729 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:26.518796  145729 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:26.536167  145729 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:26.536217  145729 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:26.552733  145729 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:26.647277  145729 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:26.647351  145729 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:26.675630  145729 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:26.675653  145729 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:26.675657  145729 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:26.675660  145729 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:26.675662  145729 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:26.675665  145729 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:26.675668  145729 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:26.675670  145729 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:26.675673  145729 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:26.675678  145729 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:26.675681  145729 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:26.675683  145729 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:26.675686  145729 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:26.675689  145729 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:26.675694  145729 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:26.675700  145729 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:26.675704  145729 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:26.675715  145729 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:26.675720  145729 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:26.675724  145729 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:26.675728  145729 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:26.675732  145729 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:26.675736  145729 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:26.675746  145729 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:26.675753  145729 cri.go:89] found id: ""
	I1018 09:01:26.675791  145729 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:26.688978  145729 out.go:203] 
	W1018 09:01:26.690034  145729 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:26.690054  145729 out.go:285] * 
	* 
	W1018 09:01:26.693359  145729 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:26.694485  145729 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.51s)
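Note: every `addons disable` failure in this report follows the same pattern: minikube's "is the cluster paused" check shells out to `sudo runc list -f json`, the command exits non-zero with `open /run/runc: no such file or directory` (see the stderr above), and the error is surfaced as MK_ADDON_DISABLE_PAUSED with exit status 11. The following is a minimal Go sketch of that probe as it appears in the log, handy for reproducing the failure by hand on the node; it is an illustration, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// Probe runc state the same way the failing check does: a non-zero exit
// from `runc list` is reported as a hard error rather than treated as
// "no containers", which is what breaks on this crio node.
func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this node the combined output contains:
		//   open /run/runc: no such file or directory
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}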

x
+
TestAddons/parallel/CloudSpanner (5.24s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-s6s56" [5559e159-57c5-466e-af8a-014a85dc25bf] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003210165s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (230.886756ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:29.233936  146124 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:29.234260  146124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:29.234272  146124 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:29.234279  146124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:29.234500  146124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:29.234772  146124 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:29.235145  146124 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:29.235161  146124 addons.go:606] checking whether the cluster is paused
	I1018 09:01:29.235242  146124 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:29.235254  146124 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:29.235616  146124 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:29.253275  146124 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:29.253322  146124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:29.272127  146124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:29.366403  146124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:29.366480  146124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:29.395814  146124 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:29.395874  146124 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:29.395880  146124 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:29.395885  146124 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:29.395889  146124 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:29.395895  146124 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:29.395899  146124 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:29.395903  146124 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:29.395907  146124 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:29.395922  146124 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:29.395931  146124 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:29.395936  146124 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:29.395943  146124 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:29.395947  146124 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:29.395954  146124 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:29.395988  146124 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:29.395995  146124 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:29.396000  146124 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:29.396003  146124 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:29.396005  146124 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:29.396008  146124 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:29.396010  146124 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:29.396012  146124 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:29.396015  146124 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:29.396017  146124 cri.go:89] found id: ""
	I1018 09:01:29.396066  146124 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:29.411652  146124 out.go:203] 
	W1018 09:01:29.413070  146124 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:29.413094  146124 out.go:285] * 
	* 
	W1018 09:01:29.417145  146124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:29.418497  146124 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.24s)

x
+
TestAddons/parallel/LocalPath (10.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-222746 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-222746 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o jsonpath={.status.phase} -n default
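The six jsonpath polls above are how the helper waits for the claim to leave Pending. A single kubectl wait expresses the same check; a minimal sketch, assuming a kubectl recent enough to support --for=jsonpath, with the timeout mirroring the test's own 5m budget:

	kubectl --context addons-222746 wait pvc/test-pvc -n default \
	  --for=jsonpath='{.status.phase}'=Bound --timeout=5m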
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a9734dff-ca59-4514-a7b7-ba73703cd2e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a9734dff-ca59-4514-a7b7-ba73703cd2e2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a9734dff-ca59-4514-a7b7-ba73703cd2e2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00360858s
addons_test.go:967: (dbg) Run:  kubectl --context addons-222746 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 ssh "cat /opt/local-path-provisioner/pvc-ac0e00a7-7476-4965-b255-10439b12d9d4_default_test-pvc/file1"
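local-path-provisioner backs each volume with a host directory named <pv-name>_<namespace>_<pvc-name> under /opt/local-path-provisioner, which is why the test can read file1 straight off the node. A quick sketch for listing the backing directories (profile name taken from this run):

	minikube -p addons-222746 ssh -- ls /opt/local-path-provisioner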
addons_test.go:988: (dbg) Run:  kubectl --context addons-222746 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-222746 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (240.749054ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:28.813656  146005 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:28.814019  146005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:28.814033  146005 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:28.814039  146005 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:28.814350  146005 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:28.814674  146005 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:28.815097  146005 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:28.815117  146005 addons.go:606] checking whether the cluster is paused
	I1018 09:01:28.815206  146005 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:28.815220  146005 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:28.815552  146005 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:28.835059  146005 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:28.835110  146005 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:28.853166  146005 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:28.950014  146005 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:28.950100  146005 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:28.982901  146005 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:28.982920  146005 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:28.982924  146005 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:28.982927  146005 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:28.982929  146005 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:28.982932  146005 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:28.982934  146005 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:28.982937  146005 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:28.982939  146005 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:28.982962  146005 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:28.982965  146005 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:28.982968  146005 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:28.982970  146005 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:28.982973  146005 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:28.982976  146005 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:28.982984  146005 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:28.982992  146005 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:28.982997  146005 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:28.983001  146005 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:28.983005  146005 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:28.983012  146005 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:28.983020  146005 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:28.983026  146005 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:28.983033  146005 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:28.983036  146005 cri.go:89] found id: ""
	I1018 09:01:28.983074  146005 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:28.997735  146005 out.go:203] 
	W1018 09:01:28.999219  146005 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:28.999242  146005 out.go:285] * 
	W1018 09:01:29.003675  146005 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:29.006948  146005 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.08s)
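As with the other addon failures in this run, the disable never reaches the addon itself: `addons disable` first checks whether the cluster is paused by asking runc for its container list, and on this CRI-O node the runc state directory /run/runc does not exist, so `sudo runc list -f json` exits 1 and minikube aborts with MK_ADDON_DISABLE_PAUSED even though crictl had just enumerated the running kube-system containers. A minimal repro sketch from the host, mirroring the two probes logged above:

	minikube -p addons-222746 ssh -- sudo runc list -f json
	# expected on this node: level=error msg="open /run/runc: no such file or directory"
	minikube -p addons-222746 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# succeeds, listing the same container IDs the disable's own probe found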

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-bmgjg" [2fef3cca-f68a-4cf3-9e0c-1d408f959feb] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002916938s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (241.88946ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:23.987816  144837 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:23.988018  144837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:23.988030  144837 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:23.988037  144837 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:23.988270  144837 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:23.988524  144837 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:23.988870  144837 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:23.988886  144837 addons.go:606] checking whether the cluster is paused
	I1018 09:01:23.988990  144837 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:23.989001  144837 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:23.989371  144837 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:24.007475  144837 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:24.007538  144837 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:24.026967  144837 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:24.122589  144837 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:24.122671  144837 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:24.153243  144837 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:24.153272  144837 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:24.153278  144837 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:24.153286  144837 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:24.153289  144837 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:24.153293  144837 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:24.153296  144837 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:24.153300  144837 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:24.153304  144837 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:24.153318  144837 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:24.153323  144837 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:24.153327  144837 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:24.153331  144837 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:24.153335  144837 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:24.153339  144837 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:24.153346  144837 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:24.153354  144837 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:24.153359  144837 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:24.153363  144837 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:24.153367  144837 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:24.153374  144837 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:24.153377  144837 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:24.153379  144837 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:24.153382  144837 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:24.153389  144837 cri.go:89] found id: ""
	I1018 09:01:24.153430  144837 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:24.168192  144837 out.go:203] 
	W1018 09:01:24.171372  144837 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:24.171401  144837 out.go:285] * 
	W1018 09:01:24.176461  144837 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:24.178249  144837 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

x
+
TestAddons/parallel/Yakd (6.23s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2vdz9" [d92c3046-1257-4786-b38b-55b9f2867ec3] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002823169s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable yakd --alsologtostderr -v=1: exit status 11 (226.136088ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:35.060668  146520 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:35.061046  146520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.061061  146520 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:35.061067  146520 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:35.061342  146520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:35.061660  146520 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:35.062117  146520 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.062139  146520 addons.go:606] checking whether the cluster is paused
	I1018 09:01:35.062270  146520 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:35.062285  146520 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:35.062662  146520 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:35.079777  146520 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:35.079863  146520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:35.097355  146520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:35.191763  146520 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:35.191861  146520 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:35.221254  146520 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:35.221284  146520 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:35.221288  146520 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:35.221292  146520 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:35.221295  146520 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:35.221299  146520 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:35.221301  146520 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:35.221304  146520 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:35.221307  146520 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:35.221319  146520 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:35.221322  146520 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:35.221324  146520 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:35.221326  146520 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:35.221329  146520 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:35.221331  146520 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:35.221343  146520 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:35.221350  146520 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:35.221354  146520 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:35.221356  146520 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:35.221359  146520 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:35.221361  146520 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:35.221363  146520 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:35.221365  146520 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:35.221368  146520 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:35.221370  146520 cri.go:89] found id: ""
	I1018 09:01:35.221418  146520 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:35.234938  146520 out.go:203] 
	W1018 09:01:35.236032  146520 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:35.236048  146520 out.go:285] * 
	W1018 09:01:35.238973  146520 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:35.240109  146520 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.23s)

x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-mcrsn" [24d32363-f3e5-407a-8e70-71806f45792c] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.002652921s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-222746 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-222746 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (250.775607ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1018 09:01:23.987812  144838 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:01:23.988026  144838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:23.988034  144838 out.go:374] Setting ErrFile to fd 2...
	I1018 09:01:23.988040  144838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:01:23.988267  144838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:01:23.988524  144838 mustload.go:65] Loading cluster: addons-222746
	I1018 09:01:23.988872  144838 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:23.988886  144838 addons.go:606] checking whether the cluster is paused
	I1018 09:01:23.988985  144838 config.go:182] Loaded profile config "addons-222746": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:01:23.989004  144838 host.go:66] Checking if "addons-222746" exists ...
	I1018 09:01:23.989371  144838 cli_runner.go:164] Run: docker container inspect addons-222746 --format={{.State.Status}}
	I1018 09:01:24.007109  144838 ssh_runner.go:195] Run: systemctl --version
	I1018 09:01:24.007172  144838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-222746
	I1018 09:01:24.026856  144838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32888 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/addons-222746/id_rsa Username:docker}
	I1018 09:01:24.122613  144838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:01:24.122702  144838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:01:24.154905  144838 cri.go:89] found id: "3f1e0ab974c3adb0e7bcba7193dbfcbd5250e283b2c1dcb0f52f3542be969c05"
	I1018 09:01:24.154926  144838 cri.go:89] found id: "79c91ae766bdce515a4eda632ca6ecd5c57a4454b7a97ae58fa3e91e32020d28"
	I1018 09:01:24.154931  144838 cri.go:89] found id: "2006d0829aa9898dfa1eea55359c71e06640d1be7b32092f97e7ec9b89f3ea71"
	I1018 09:01:24.154936  144838 cri.go:89] found id: "da6e806e056d472d4816ed9122b9d99d62c91c582363ba61d34c584d97fda56c"
	I1018 09:01:24.154940  144838 cri.go:89] found id: "a4fd53616bc11869d8845646b763db80942a4e2c50ec48595c4bfdcc5950f898"
	I1018 09:01:24.154945  144838 cri.go:89] found id: "edce1d10c783fda9fcbe60cf570e174a37959936ebce9285eb63101e393cd691"
	I1018 09:01:24.154949  144838 cri.go:89] found id: "cc9e7bafa8a6c0370a3968660871e38749909a319d5191d1d16f707458abcfca"
	I1018 09:01:24.154953  144838 cri.go:89] found id: "0f74c115de3ce217c6a4a1e4f5ee891722f4b28a8e3da1f42cb6f42b7b6a7f23"
	I1018 09:01:24.154957  144838 cri.go:89] found id: "1267812961fa39d2e5a477c40f29fdca6aefbcd66ca083b2b5ab8083ed1f114d"
	I1018 09:01:24.154964  144838 cri.go:89] found id: "1c12fcfd58686f7ebc0bf26390f7f9977bab5450b5c08604e1773c936b620473"
	I1018 09:01:24.154968  144838 cri.go:89] found id: "13543c0f3dca2e5d1884741fc895c7bdd56c7ac061eaaba1fc193a2068c33a5d"
	I1018 09:01:24.154972  144838 cri.go:89] found id: "4bf2327b6d921b2a03b0ad6117b7e519262aa09b660a7aeed6481d1dd4d135be"
	I1018 09:01:24.154978  144838 cri.go:89] found id: "3c83994993aa0d26323e700488ed01d676cb6d662f3d4e2d1dcf544bab3ade13"
	I1018 09:01:24.154980  144838 cri.go:89] found id: "430460fa55c7751bc85693cb537d90a449762549c9ce77554e35e334382aa271"
	I1018 09:01:24.154984  144838 cri.go:89] found id: "910f4bbb59848023f2078cd2cfea1447cb68096d3009bcbbe3a3c1320b6f8dde"
	I1018 09:01:24.154994  144838 cri.go:89] found id: "ebae0d10fe53d742d91554d1abb6efb1544b7edb5ced01f3a8a801486ad51fab"
	I1018 09:01:24.154999  144838 cri.go:89] found id: "703f4d898ac52da091473ef98a4001c145bd5d673318095a7ac9cf90bdbc7c9e"
	I1018 09:01:24.155019  144838 cri.go:89] found id: "d058db45cb84217243407b88e552f1784748e39a35567ea0e5247d5882252c1a"
	I1018 09:01:24.155038  144838 cri.go:89] found id: "d3608cbd20f633a6764e2712e3ee854a9692057d984da4e2f5c3464738e4e90c"
	I1018 09:01:24.155044  144838 cri.go:89] found id: "2026a4d802754bed21eb7d93c37098305fc9c9bdbdcc88cdd05823c5ac50d733"
	I1018 09:01:24.155051  144838 cri.go:89] found id: "976f8ced94e7bdbe0b1dfb23c58f7eee730f0b224a2f3f365176dc7ffa94a7fb"
	I1018 09:01:24.155060  144838 cri.go:89] found id: "c2f5337233ca038130ade85c3fcc550ea5f3b3f71f90e0c38b5ae2678b3fe711"
	I1018 09:01:24.155064  144838 cri.go:89] found id: "fce4b4ac493ecd94e292705fbe09b9a365f25281c2165df1fbcacdbafda86778"
	I1018 09:01:24.155071  144838 cri.go:89] found id: "179aeead4dbf5293d57a9a1825fbaa5a9655630efd6b2ac5da64714ae30e9ea4"
	I1018 09:01:24.155075  144838 cri.go:89] found id: ""
	I1018 09:01:24.155118  144838 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:01:24.170502  144838 out.go:203] 
	W1018 09:01:24.174960  144838 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:01:24.174979  144838 out.go:285] * 
	W1018 09:01:24.179895  144838 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:01:24.185538  144838 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-222746 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

x
+
TestFunctional/parallel/ServiceCmdConnect (602.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-622052 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-622052 expose deployment hello-node-connect --type=NodePort --port=8080
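With the deployment exposed as a NodePort, the service URL would normally be resolved with minikube's service helper; a sketch (the URL never becomes reachable in this run because the image pull below keeps failing):

	out/minikube-linux-amd64 -p functional-622052 service hello-node-connect --url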
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2qzhf" [d2c2dbef-6d8c-47c2-ac02-3544d0afd569] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-622052 -n functional-622052
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-18 09:17:39.35724151 +0000 UTC m=+1179.065422904
functional_test.go:1645: (dbg) Run:  kubectl --context functional-622052 describe po hello-node-connect-7d85dfc575-2qzhf -n default
functional_test.go:1645: (dbg) kubectl --context functional-622052 describe po hello-node-connect-7d85dfc575-2qzhf -n default:
Name:             hello-node-connect-7d85dfc575-2qzhf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622052/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:07:38 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh6w4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lh6w4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2qzhf to functional-622052
Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
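The pull fails before any container is created: CRI-O here enforces short-name resolution, so the unqualified reference kicbase/echo-server matches more than one configured registry and is rejected as ambiguous. Two possible remedies, sketched under the assumption that docker.io is the intended registry:

	# fully qualify the image on the deployment
	kubectl --context functional-622052 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest
	# or, inside the node, relax the policy in /etc/containers/registries.conf:
	#   short-name-mode = "permissive"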
functional_test.go:1645: (dbg) Run:  kubectl --context functional-622052 logs hello-node-connect-7d85dfc575-2qzhf -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-622052 logs hello-node-connect-7d85dfc575-2qzhf -n default: exit status 1 (60.351829ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-2qzhf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-622052 logs hello-node-connect-7d85dfc575-2qzhf -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-622052 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-2qzhf
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622052/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:07:38 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh6w4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lh6w4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2qzhf to functional-622052
Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-622052 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-622052 logs -l app=hello-node-connect: exit status 1 (61.387103ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-2qzhf" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-622052 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-622052 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.157.223
IPs:                      10.99.157.223
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32718/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
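Endpoints is empty because no pod matching the selector ever reported Ready, consistent with the ImagePullBackOff above; the service wiring itself looks correct. A quick confirmation sketch:

	kubectl --context functional-622052 get endpoints hello-node-connect
	kubectl --context functional-622052 get pods -l app=hello-node-connect \
	  -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'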
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-622052
helpers_test.go:243: (dbg) docker inspect functional-622052:

-- stdout --
	[
	    {
	        "Id": "37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249",
	        "Created": "2025-10-18T09:05:18.537064276Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 158509,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:05:18.568733745Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249/hostname",
	        "HostsPath": "/var/lib/docker/containers/37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249/hosts",
	        "LogPath": "/var/lib/docker/containers/37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249/37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249-json.log",
	        "Name": "/functional-622052",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-622052:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-622052",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37d94dc89fe7161247feded14905f21776fc6521d3f3a06f8a3c491e940ff249",
	                "LowerDir": "/var/lib/docker/overlay2/1766193211ddcf0ea5b88623f196a5a17900480d98516a9258a991d7a2ab7d6c-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1766193211ddcf0ea5b88623f196a5a17900480d98516a9258a991d7a2ab7d6c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1766193211ddcf0ea5b88623f196a5a17900480d98516a9258a991d7a2ab7d6c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1766193211ddcf0ea5b88623f196a5a17900480d98516a9258a991d7a2ab7d6c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-622052",
	                "Source": "/var/lib/docker/volumes/functional-622052/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-622052",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-622052",
	                "name.minikube.sigs.k8s.io": "functional-622052",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "55c5377ce97d731b86469714c1de9d6f41587e051e461444e3df7e32da5cc6d8",
	            "SandboxKey": "/var/run/docker/netns/55c5377ce97d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32898"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32899"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32902"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32900"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32901"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-622052": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:da:a4:c7:87:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff779ac3cab2059f7edd4446c3c70db7fcc8599f21362b83cc35ea8fc3362a35",
	                    "EndpointID": "cf5116e465f56b0c05ef01e17510b7d0a6c9c4db536d7cbe66c2a446e3b85847",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-622052",
	                        "37d94dc89fe7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
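The inspect output above shows each exposed container port published on 127.0.0.1 with an ephemeral host port (SSH 22/tcp on 32898 through the API server's 8441/tcp on 32901). As a minimal sketch, assuming the functional-622052 container is still running, the same mapping can be read back with Docker's Go-template formatter:

	# Host port that Docker mapped to the container's SSH port (22/tcp).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-622052
	# All published ports at once.
	docker port functional-622052
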
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-622052 -n functional-622052
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 logs -n 25: (1.256566367s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-622052 ssh -- ls -la /mount-9p                                                                          │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │ 18 Oct 25 09:07 UTC │
	│ ssh            │ functional-622052 ssh sudo umount -f /mount-9p                                                                     │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ mount          │ -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount3 --alsologtostderr -v=1 │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ mount          │ -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount1 --alsologtostderr -v=1 │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ mount          │ -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount2 --alsologtostderr -v=1 │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ ssh            │ functional-622052 ssh findmnt -T /mount1                                                                           │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ ssh            │ functional-622052 ssh findmnt -T /mount1                                                                           │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │ 18 Oct 25 09:07 UTC │
	│ ssh            │ functional-622052 ssh findmnt -T /mount2                                                                           │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │ 18 Oct 25 09:07 UTC │
	│ ssh            │ functional-622052 ssh findmnt -T /mount3                                                                           │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │ 18 Oct 25 09:07 UTC │
	│ mount          │ -p functional-622052 --kill=true                                                                                   │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │                     │
	│ image          │ functional-622052 image ls --format short --alsologtostderr                                                        │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:07 UTC │ 18 Oct 25 09:08 UTC │
	│ image          │ functional-622052 image ls --format yaml --alsologtostderr                                                         │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ ssh            │ functional-622052 ssh pgrep buildkitd                                                                              │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │                     │
	│ image          │ functional-622052 image build -t localhost/my-image:functional-622052 testdata/build --alsologtostderr             │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ image          │ functional-622052 image ls                                                                                         │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ image          │ functional-622052 image ls --format json --alsologtostderr                                                         │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ image          │ functional-622052 image ls --format table --alsologtostderr                                                        │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ update-context │ functional-622052 update-context --alsologtostderr -v=2                                                            │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ update-context │ functional-622052 update-context --alsologtostderr -v=2                                                            │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ update-context │ functional-622052 update-context --alsologtostderr -v=2                                                            │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:08 UTC │ 18 Oct 25 09:08 UTC │
	│ service        │ functional-622052 service list                                                                                     │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ service        │ functional-622052 service list -o json                                                                             │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │ 18 Oct 25 09:17 UTC │
	│ service        │ functional-622052 service --namespace=default --https --url hello-node                                             │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ service        │ functional-622052 service hello-node --url --format={{.IP}}                                                        │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	│ service        │ functional-622052 service hello-node --url                                                                         │ functional-622052 │ jenkins │ v1.37.0 │ 18 Oct 25 09:17 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
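	
	The last three service invocations above have no END TIME, which lines up with the ServiceCmd/HTTPS, ServiceCmd/Format, and ServiceCmd/URL failures. A hedged manual reproduction, assuming the same profile and a hello-node service in the default namespace, would be:
	
	  # Ask minikube for the URL; if it hangs, read the NodePort straight from the service object.
	  out/minikube-linux-amd64 -p functional-622052 service hello-node --url
	  kubectl get svc hello-node -o jsonpath='{.spec.ports[0].nodePort}'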
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:07:26
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:07:26.733927  167340 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:26.734485  167340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.734501  167340 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:26.734507  167340 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.736360  167340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:07:26.737099  167340 out.go:368] Setting JSON to false
	I1018 09:07:26.738441  167340 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2991,"bootTime":1760775456,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:07:26.738563  167340 start.go:141] virtualization: kvm guest
	I1018 09:07:26.740349  167340 out.go:179] * [functional-622052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:07:26.741724  167340 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:07:26.741740  167340 notify.go:220] Checking for updates...
	I1018 09:07:26.747244  167340 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:07:26.748368  167340 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:07:26.749434  167340 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:07:26.750610  167340 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:07:26.751830  167340 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:07:26.753623  167340 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:26.754373  167340 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:07:26.784284  167340 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:07:26.784394  167340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:07:26.847674  167340 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 09:07:26.837420363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:07:26.847840  167340 docker.go:318] overlay module found
	I1018 09:07:26.849535  167340 out.go:179] * Using the docker driver based on existing profile
	I1018 09:07:26.850974  167340 start.go:305] selected driver: docker
	I1018 09:07:26.850994  167340 start.go:925] validating driver "docker" against &{Name:functional-622052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:26.851078  167340 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:07:26.851163  167340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:07:26.915500  167340 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 09:07:26.90349882 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:07:26.916515  167340 cni.go:84] Creating CNI manager for ""
	I1018 09:07:26.916594  167340 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:07:26.916654  167340 start.go:349] cluster config:
	{Name:functional-622052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:26.918482  167340 out.go:179] * dry-run validation complete!
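	
	The cluster config echoed above pins Memory:4096, CPUs:2, APIServerPort:8441 and one apiserver ExtraOption (enable-admission-plugins=NamespaceAutoProvision). A sketch of a start invocation that would yield such a profile, using minikube's documented flags (the original command line is not part of this log), is:
	
	  out/minikube-linux-amd64 start -p functional-622052 \
	    --driver=docker --container-runtime=crio \
	    --memory=4096 --cpus=2 --apiserver-port=8441 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision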
	
	
	==> CRI-O <==
	Oct 18 09:07:53 functional-622052 crio[3560]: time="2025-10-18T09:07:53.729290511Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=8cb8bbea-e05c-4d96-b04a-33dde108a7ee name=/runtime.v1.ImageService/PullImage
	Oct 18 09:07:53 functional-622052 crio[3560]: time="2025-10-18T09:07:53.730766508Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.280246733Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=8cb8bbea-e05c-4d96-b04a-33dde108a7ee name=/runtime.v1.ImageService/PullImage
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.28090765Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=7e9debe5-e7d2-4e7a-98ba-c233628bb0a4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.282947612Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=6110b73e-e53a-4ae3-9129-0b412c7ea2b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.288509101Z" level=info msg="Creating container: default/mysql-5bb876957f-w9gsr/mysql" id=9581c4b1-654e-47f7-a1ba-2af0527d13cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.29021823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.295270939Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.295947328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.328532332Z" level=info msg="Created container f0ceccf085984c3ff6f505c83e0014a9dffb3b60d4520e069dcad11b48ac9bdc: default/mysql-5bb876957f-w9gsr/mysql" id=9581c4b1-654e-47f7-a1ba-2af0527d13cf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.329217218Z" level=info msg="Starting container: f0ceccf085984c3ff6f505c83e0014a9dffb3b60d4520e069dcad11b48ac9bdc" id=de73b288-b74f-41c2-8f9f-761168bcc6ed name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:08:00 functional-622052 crio[3560]: time="2025-10-18T09:08:00.33135434Z" level=info msg="Started container" PID=7631 containerID=f0ceccf085984c3ff6f505c83e0014a9dffb3b60d4520e069dcad11b48ac9bdc description=default/mysql-5bb876957f-w9gsr/mysql id=de73b288-b74f-41c2-8f9f-761168bcc6ed name=/runtime.v1.RuntimeService/StartContainer sandboxID=38ff83cae4e76db596bd56250c94b8b4c9cf437bdc41fa5763d03dfa47536a36
	Oct 18 09:08:02 functional-622052 crio[3560]: time="2025-10-18T09:08:02.762463497Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=00c34ee1-ce5b-4fa7-9159-b9250ce72180 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:08:17 functional-622052 crio[3560]: time="2025-10-18T09:08:17.762643842Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0ef09a85-3bce-4704-b671-626a9498155a name=/runtime.v1.ImageService/PullImage
	Oct 18 09:08:41 functional-622052 crio[3560]: time="2025-10-18T09:08:41.760176253Z" level=info msg="Stopping pod sandbox: 14e237a38c0f03f845531d5274dbed0b082a42b8c056dc13aee6eab6be049d0e" id=c60f4515-e656-49dc-800e-508c71b11346 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:08:41 functional-622052 crio[3560]: time="2025-10-18T09:08:41.760235892Z" level=info msg="Stopped pod sandbox (already stopped): 14e237a38c0f03f845531d5274dbed0b082a42b8c056dc13aee6eab6be049d0e" id=c60f4515-e656-49dc-800e-508c71b11346 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 18 09:08:41 functional-622052 crio[3560]: time="2025-10-18T09:08:41.760543372Z" level=info msg="Removing pod sandbox: 14e237a38c0f03f845531d5274dbed0b082a42b8c056dc13aee6eab6be049d0e" id=a4e7784c-c249-489c-aa6c-8bbc675046fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:08:41 functional-622052 crio[3560]: time="2025-10-18T09:08:41.763640348Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:08:41 functional-622052 crio[3560]: time="2025-10-18T09:08:41.763696492Z" level=info msg="Removed pod sandbox: 14e237a38c0f03f845531d5274dbed0b082a42b8c056dc13aee6eab6be049d0e" id=a4e7784c-c249-489c-aa6c-8bbc675046fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 18 09:08:50 functional-622052 crio[3560]: time="2025-10-18T09:08:50.762878174Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=fc532e25-dc95-4379-8e4f-d92cf79eee79 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:09:12 functional-622052 crio[3560]: time="2025-10-18T09:09:12.762490481Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=744cc8ff-d0b1-4e6f-a399-428de68eacc2 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:10:22 functional-622052 crio[3560]: time="2025-10-18T09:10:22.762636298Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7e37a5f9-0d13-4258-b11a-58307710899c name=/runtime.v1.ImageService/PullImage
	Oct 18 09:10:43 functional-622052 crio[3560]: time="2025-10-18T09:10:43.762289131Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=0323ba4a-ecc9-4300-8a21-7586f88fa997 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:13:04 functional-622052 crio[3560]: time="2025-10-18T09:13:04.762390883Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b009b6dc-cd9c-4f8a-8eb1-c35cdad44b6d name=/runtime.v1.ImageService/PullImage
	Oct 18 09:13:32 functional-622052 crio[3560]: time="2025-10-18T09:13:32.762508389Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=9bc06817-92f1-4a64-aecd-1249136c44b8 name=/runtime.v1.ImageService/PullImage
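	
	Note that the pull of kicbase/echo-server:latest is retried from 09:08:02 through 09:13:32 with no matching "Pulled image" line; the hello-node pods behind the failing service tests run this image. A hedged diagnostic, assuming SSH access to the node and the deployment's default app=hello-node label, would be:
	
	  # Did the image ever land on the node?
	  out/minikube-linux-amd64 -p functional-622052 ssh -- sudo crictl images | grep echo-server
	  # What do the pod's pull events say?
	  kubectl describe pod -l app=hello-node | grep -A5 Events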
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f0ceccf085984       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   38ff83cae4e76       mysql-5bb876957f-w9gsr                       default
	84f77c5958da0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   85f9015b4a4c9       busybox-mount                                default
	03ae650c15ead       docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115                  9 minutes ago       Running             myfrontend                  0                   ea4c9676ac967       sp-pod                                       default
	d204ed8f3b120       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   945952fa7da95       nginx-svc                                    default
	be5a6144999d9       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   8e71071ac5e6c       dashboard-metrics-scraper-77bf4d6c4c-5ztl5   kubernetes-dashboard
	68bf413266bd6       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   24db87f581e27       kubernetes-dashboard-855c9754f9-n6jz6        kubernetes-dashboard
	14751f45b4cb8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     3                   8b2c434746b79       kube-controller-manager-functional-622052    kube-system
	97200ca394e33       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              2                   13b1219bab0a8       kube-apiserver-functional-622052             kube-system
	18554b8dc47ae       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Exited              kube-apiserver              1                   13b1219bab0a8       kube-apiserver-functional-622052             kube-system
	3d1fb7b5868a9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     2                   8b2c434746b79       kube-controller-manager-functional-622052    kube-system
	cf587557bfc6c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Running             etcd                        1                   105715d26a085       etcd-functional-622052                       kube-system
	ff033a8859850       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   2b28a8cf6baa7       kube-scheduler-functional-622052             kube-system
	59f724e8f20c9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   99e9b2ddd8e90       kindnet-vfxbc                                kube-system
	32c6164cff3a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   9be5508c3e39f       coredns-66bc5c9577-tdzdn                     kube-system
	95c0f8c98377e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   e0676a10e77d5       storage-provisioner                          kube-system
	58cb717f8e60b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   188ec005a295b       kube-proxy-6tr6k                             kube-system
	ecc09a42b1a96       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   9be5508c3e39f       coredns-66bc5c9577-tdzdn                     kube-system
	21f5c159b2ec7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   e0676a10e77d5       storage-provisioner                          kube-system
	407e14ddb1c59       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   99e9b2ddd8e90       kindnet-vfxbc                                kube-system
	4ab730805172d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   188ec005a295b       kube-proxy-6tr6k                             kube-system
	2e3ee1dcafada       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   105715d26a085       etcd-functional-622052                       kube-system
	5601d4c012151       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   2b28a8cf6baa7       kube-scheduler-functional-622052             kube-system
	
	
	==> coredns [32c6164cff3a1a652a4c7429ef288ef56e948ffd834c2a1598828accfe981e95] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53356 - 59473 "HINFO IN 5922623102808738224.1645839871927490671. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022213223s
	
	
	==> coredns [ecc09a42b1a9614988fb0623ee9bdb9060ba82e6655d49b499df5b3464d6f0d9] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47425 - 39211 "HINFO IN 2618544017024567374.5750114447012162191. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021530243s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-622052
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-622052
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=functional-622052
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_05_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:05:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-622052
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:17:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:17:05 +0000   Sat, 18 Oct 2025 09:05:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:17:05 +0000   Sat, 18 Oct 2025 09:05:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:17:05 +0000   Sat, 18 Oct 2025 09:05:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:17:05 +0000   Sat, 18 Oct 2025 09:06:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-622052
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                475c7faf-8f79-4e03-abc8-3ef4ebd953ff
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xxpnx                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-2qzhf           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-w9gsr                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m47s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-tdzdn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-622052                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-vfxbc                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-622052              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-622052     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-6tr6k                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-622052              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-5ztl5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-n6jz6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-622052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-622052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-622052 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-622052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-622052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-622052 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-622052 event: Registered Node functional-622052 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-622052 status is now: NodeReady
	  Normal  NodeNotReady             11m                kubelet          Node functional-622052 status is now: NodeNotReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m                kubelet          Node functional-622052 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                kubelet          Node functional-622052 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                kubelet          Node functional-622052 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-622052 event: Registered Node functional-622052 in Controller
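	
	The node report above can be regenerated at any time; a quick recheck of just the node conditions, assuming kubectl still points at this cluster, is:
	
	  kubectl describe node functional-622052
	  # Conditions only, one per line.
	  kubectl get node functional-622052 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'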
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
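	
	The repeated "martian source" lines are the kernel logging packets that claim a 127.0.0.1 source on eth0, likely a side effect of hairpinned service traffic inside the kicbase network rather than a test failure in itself. Whether they appear at all is governed by a sysctl; a quick check on the host, as a sketch:
	
	  sysctl net.ipv4.conf.all.log_martians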
	
	
	==> etcd [2e3ee1dcafada3385555002298a7a3d0dd9516257f4c094274a8b3e1304d3dab] <==
	{"level":"warn","ts":"2025-10-18T09:05:31.699765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.706894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.720662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.731341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.738424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.745455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:05:31.795552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:06:22.412674Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-18T09:06:22.412766Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-622052","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-18T09:06:22.412887Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:06:29.414939Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-18T09:06:29.415064Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:06:29.415104Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-18T09:06:29.415169Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:06:29.415137Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-18T09:06:29.415189Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-18T09:06:29.415191Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-18T09:06:29.415195Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:06:29.415212Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-18T09:06:29.415216Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-18T09:06:29.415231Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:06:29.417413Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-18T09:06:29.417464Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-18T09:06:29.417493Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-18T09:06:29.417506Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-622052","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [cf587557bfc6cfcf8adc30dfc32ea698728484f9aeffd0a3b1b0dd00b895779a] <==
	{"level":"warn","ts":"2025-10-18T09:07:00.911676Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.917689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.923563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.929522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.935722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.942201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.955564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.959139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.966111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.972014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.977735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.983716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:00.996434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.002437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.009209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.015167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.021003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.027312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.041017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.047004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.054419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:07:01.097458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:17:00.638683Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1166}
	{"level":"info","ts":"2025-10-18T09:17:00.661177Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1166,"took":"22.139517ms","hash":2762924384,"current-db-size-bytes":3473408,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1630208,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-18T09:17:00.661231Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2762924384,"revision":1166,"compact-revision":-1}
	
	
	==> kernel <==
	 09:17:40 up  1:00,  0 user,  load average: 0.19, 0.24, 0.61
	Linux functional-622052 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [407e14ddb1c59468a5edeb9b4a4aff10c9ead1413f0d1515df49d67dba1332d1] <==
	I1018 09:05:41.023285       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:05:41.023549       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1018 09:05:41.023699       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:05:41.023715       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:05:41.023734       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:05:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:05:41.233627       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:05:41.233745       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:05:41.233786       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:05:41.233936       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:05:41.634586       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:05:41.634621       1 metrics.go:72] Registering metrics
	I1018 09:05:41.634708       1 controller.go:711] "Syncing nftables rules"
	I1018 09:05:51.234582       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:05:51.234653       1 main.go:301] handling current node
	I1018 09:06:01.241486       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:06:01.241528       1 main.go:301] handling current node
	I1018 09:06:11.238384       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:06:11.238439       1 main.go:301] handling current node
	I1018 09:06:21.237899       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:06:21.237935       1 main.go:301] handling current node
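
	The only non-routine line above is the NRI one: kindnetd's kube-network-policies plugin tried to register with the runtime's NRI socket, found none, and fell back to its informer-driven path (hence the subsequent cache-sync and nftables lines). If NRI integration were wanted on this CRI-O node, enabling it would look roughly like the following (a sketch; the "enable_nri" key under [crio.nri] should be checked against the installed CRI-O version):

	sudo tee /etc/crio/crio.conf.d/10-nri.conf <<'EOF'
	[crio.nri]
	enable_nri = true
	EOF
	sudo systemctl restart crio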
	
	
	==> kindnet [59f724e8f20c9bff41a1ae9b5476f932c0d2315fec33f98dc69175a3c815cbf4] <==
	I1018 09:15:33.346948       1 main.go:301] handling current node
	I1018 09:15:43.338932       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:15:43.338976       1 main.go:301] handling current node
	I1018 09:15:53.339088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:15:53.339127       1 main.go:301] handling current node
	I1018 09:16:03.346006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:03.346039       1 main.go:301] handling current node
	I1018 09:16:13.344972       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:13.345011       1 main.go:301] handling current node
	I1018 09:16:23.338961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:23.338994       1 main.go:301] handling current node
	I1018 09:16:33.343547       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:33.343585       1 main.go:301] handling current node
	I1018 09:16:43.346919       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:43.346950       1 main.go:301] handling current node
	I1018 09:16:53.343385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:16:53.343415       1 main.go:301] handling current node
	I1018 09:17:03.340120       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:17:03.340150       1 main.go:301] handling current node
	I1018 09:17:13.338941       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:17:13.338977       1 main.go:301] handling current node
	I1018 09:17:23.342101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:17:23.342135       1 main.go:301] handling current node
	I1018 09:17:33.344906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1018 09:17:33.344940       1 main.go:301] handling current node
	
	
	==> kube-apiserver [18554b8dc47aeab234a9dddc7cd5bd447a61927f4fa042aa1c54757a82221a58] <==
	I1018 09:06:42.893855       1 options.go:263] external host was not specified, using 192.168.49.2
	I1018 09:06:42.895897       1 server.go:150] Version: v1.34.1
	I1018 09:06:42.895927       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1018 09:06:42.896220       1 run.go:72] "command failed" err="failed to create listener: failed to listen on 0.0.0.0:8441: listen tcp 0.0.0.0:8441: bind: address already in use"
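
	This apiserver container exited immediately because another instance still held 8441 during the functional test's restart sequence, so the bind failure is the expected half of that handover rather than an independent fault. When it is not expected, the listener holding the port can be identified directly on the node (a sketch):

	minikube -p functional-622052 ssh -- sudo ss -ltnp '( sport = :8441 )'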
	
	
	==> kube-apiserver [97200ca394e3324204b9a3eeff5de02ca23677926c362204154cab60bed862d3] <==
	I1018 09:07:01.575238       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:07:01.605176       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:07:02.469713       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1018 09:07:02.673561       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1018 09:07:02.674804       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:07:02.679201       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:07:08.758078       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:07:10.223265       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:07:21.303337       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.110.21.252"}
	I1018 09:07:26.017280       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:07:26.122397       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.214.93"}
	I1018 09:07:27.855723       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:07:27.901979       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:07:27.914087       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:07:27.962200       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.228.115"}
	I1018 09:07:27.977437       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.99.97"}
	I1018 09:07:34.997573       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.108.33.6"}
	I1018 09:07:39.048488       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.157.223"}
	E1018 09:07:44.531358       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44444: use of closed network connection
	E1018 09:07:53.216731       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36602: use of closed network connection
	I1018 09:07:53.343965       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.16.201"}
	E1018 09:08:07.473133       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36552: use of closed network connection
	E1018 09:08:08.644370       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:36574: use of closed network connection
	E1018 09:08:10.611477       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:57800: use of closed network connection
	I1018 09:17:01.505284       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [14751f45b4cb8515133cc7ce50c90796c450d9c3e4f3d6d5a21127d9b94aa680] <==
	I1018 09:07:10.119598       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:07:10.119608       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:07:10.119700       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:07:10.119792       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-622052"
	I1018 09:07:10.119863       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:07:10.120031       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:07:10.120169       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:07:10.120384       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:07:10.121435       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:07:10.121987       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:07:10.124475       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:07:10.125710       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:07:10.125789       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:07:10.127850       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:07:10.130030       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:07:10.131196       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:07:10.132644       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:07:10.145977       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1018 09:07:27.902775       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.909655       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.912498       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.913923       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.916301       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.921105       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1018 09:07:27.923433       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [3d1fb7b5868a9219fed9a45d20775a3e0e5126376a30d7bcb182dd873784ceff] <==
	I1018 09:06:43.733448       1 shared_informer.go:349] "Waiting for caches to sync" controller="cronjob"
	I1018 09:06:43.751732       1 controllermanager.go:781] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I1018 09:06:43.751759       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1018 09:06:43.751915       1 shared_informer.go:349] "Waiting for caches to sync" controller="validatingadmissionpolicy-status"
	I1018 09:06:43.753874       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1018 09:06:43.754070       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1018 09:06:43.754088       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I1018 09:06:43.755721       1 controllermanager.go:781] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I1018 09:06:43.755813       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I1018 09:06:43.757892       1 controllermanager.go:781] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1018 09:06:43.757974       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1018 09:06:43.757989       1 shared_informer.go:349] "Waiting for caches to sync" controller="PVC protection"
	I1018 09:06:43.759768       1 controllermanager.go:781] "Started controller" controller="resourceclaim-controller"
	I1018 09:06:43.759874       1 controller.go:397] "Starting resource claim controller" logger="resourceclaim-controller"
	I1018 09:06:43.759977       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource_claim"
	I1018 09:06:43.762114       1 controllermanager.go:781] "Started controller" controller="taint-eviction-controller"
	I1018 09:06:43.762151       1 taint_eviction.go:282] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1018 09:06:43.762221       1 taint_eviction.go:288] "Sending events to api server" logger="taint-eviction-controller"
	I1018 09:06:43.762279       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint-eviction-controller"
	I1018 09:06:43.764471       1 controllermanager.go:781] "Started controller" controller="endpointslice-controller"
	I1018 09:06:43.764694       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1018 09:06:43.764725       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice"
	I1018 09:06:43.818233       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	E1018 09:06:45.787700       1 controllermanager.go:755] "Error starting controller" err="failed to discover resources: Get \"https://192.168.49.2:8441/api\": dial tcp 192.168.49.2:8441: connect: connection refused" controller="resourcequota-controller"
	E1018 09:06:45.787729       1 controllermanager.go:250] "Error starting controllers" err="failed to discover resources: Get \"https://192.168.49.2:8441/api\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [4ab730805172d812f5f2b615e727935978dba82d2615c8783711934d9eecd8ba] <==
	I1018 09:05:40.870778       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:05:40.936484       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:05:41.037452       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:05:41.037489       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1018 09:05:41.037561       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:05:41.055582       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:05:41.055638       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:05:41.061063       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:05:41.061379       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:05:41.061394       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:05:41.062556       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:05:41.062575       1 config.go:200] "Starting service config controller"
	I1018 09:05:41.062592       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:05:41.062599       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:05:41.062679       1 config.go:309] "Starting node config controller"
	I1018 09:05:41.062687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:05:41.062694       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:05:41.062687       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:05:41.062731       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:05:41.162808       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:05:41.162868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:05:41.162887       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
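
	The startup warning above is advisory: with nodePortAddresses unset, kube-proxy accepts NodePort connections on every local IP, including loopback. Acting on the suggestion would mean editing the kube-proxy configuration and rolling the DaemonSet, roughly as follows (a sketch; the "primary" keyword requires a reasonably recent Kubernetes release):

	# in the kube-system/kube-proxy ConfigMap (key config.conf), set:
	#   nodePortAddresses: ["primary"]
	# then restart the DaemonSet pods so they pick it up:
	kubectl -n kube-system rollout restart daemonset kube-proxy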
	
	
	==> kube-proxy [58cb717f8e60bde86c0fc52cec0fe812c3e62323a7125929967da7a17c6017bc] <==
	E1018 09:06:23.147692       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:06:23.165793       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:06:23.165852       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:06:23.171328       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:06:23.172543       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:06:23.173098       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:06:23.174759       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:06:23.174785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:06:23.174796       1 config.go:200] "Starting service config controller"
	I1018 09:06:23.174810       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:06:23.174841       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:06:23.174863       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:06:23.174863       1 config.go:309] "Starting node config controller"
	I1018 09:06:23.174874       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:06:23.174881       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:06:23.275334       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:06:23.275374       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:06:23.275383       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	E1018 09:06:41.212310       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:06:41.213901       1 reflector.go:205] "Failed to watch" err="nodes \"functional-622052\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:06:41.213950       1 reflector.go:205] "Failed to watch" err="endpointslices.discovery.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"endpointslices\" in API group \"discovery.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.EndpointSlice"
	E1018 09:06:41.213981       1 reflector.go:205] "Failed to watch" err="servicecidrs.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot watch resource \"servicecidrs\" in API group \"networking.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:node-proxier\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ServiceCIDR"
	E1018 09:06:44.193661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-622052&resourceVersion=477\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:06:47.456546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-622052&resourceVersion=477\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:06:54.397375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-622052&resourceVersion=477\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
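
	The 09:06:41 "Failed to watch" burst is an RBAC race during the apiserver restart: the bootstrap ClusterRoles (system:node-proxier and friends) had not been re-listed yet, and the later refusals are plain connection errors while 8441 was down. Both clear once the apiserver finishes coming back; afterwards the bootstrap policy can be confirmed with:

	kubectl get clusterrole system:node-proxier system:basic-user system:discovery
	kubectl get clusterrolebinding system:node-proxier -o wide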
	
	
	==> kube-scheduler [5601d4c0121510f8fb22822a2feb591fced0fbf32c2427457091404de1907778] <==
	E1018 09:05:32.225605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:05:32.225636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:05:32.225700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:05:32.225721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:05:32.225723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:05:32.225758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:05:32.225732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:05:32.225778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:05:32.225791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:05:32.225899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:05:32.225920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:05:33.120239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:05:33.212057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:05:33.268250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:05:33.278341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:05:33.328750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:05:33.349731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:05:33.393843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1018 09:05:35.823645       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:06:29.521710       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:06:29.521850       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1018 09:06:29.521889       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1018 09:06:29.521896       1 server.go:265] "[graceful-termination] secure server is exiting"
	I1018 09:06:29.521712       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1018 09:06:29.521927       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff033a8859850776a486386fa58216347c7523ee3e353398abc883d48fbc17c9] <==
	E1018 09:06:41.166220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:06:41.166231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:06:41.166303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:06:41.166347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:06:41.166352       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:06:41.166397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:06:41.166439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:06:41.166469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:06:41.192854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:06:41.192958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 09:06:41.413727       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1018 09:06:47.476571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:06:48.289629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:06:48.332948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:06:48.549680       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:06:49.099795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:06:50.022627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:06:51.501586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:06:51.556466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:06:52.055599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:06:52.522747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:06:52.825573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:06:53.395896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1018 09:07:06.314343       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:07:07.414223       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:15:01 functional-622052 kubelet[4312]: E1018 09:15:01.763364    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:15:03 functional-622052 kubelet[4312]: E1018 09:15:03.762920    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:15:16 functional-622052 kubelet[4312]: E1018 09:15:16.762554    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:15:17 functional-622052 kubelet[4312]: E1018 09:15:17.761870    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:15:29 functional-622052 kubelet[4312]: E1018 09:15:29.762689    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:15:30 functional-622052 kubelet[4312]: E1018 09:15:30.762009    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:15:40 functional-622052 kubelet[4312]: E1018 09:15:40.762217    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:15:44 functional-622052 kubelet[4312]: E1018 09:15:44.761704    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:15:54 functional-622052 kubelet[4312]: E1018 09:15:54.761910    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:15:58 functional-622052 kubelet[4312]: E1018 09:15:58.762807    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:16:08 functional-622052 kubelet[4312]: E1018 09:16:08.762193    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:16:13 functional-622052 kubelet[4312]: E1018 09:16:13.762168    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:16:23 functional-622052 kubelet[4312]: E1018 09:16:23.762520    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:16:28 functional-622052 kubelet[4312]: E1018 09:16:28.761727    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:16:38 functional-622052 kubelet[4312]: E1018 09:16:38.762110    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:16:42 functional-622052 kubelet[4312]: E1018 09:16:42.762552    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:16:51 functional-622052 kubelet[4312]: E1018 09:16:51.762516    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:16:53 functional-622052 kubelet[4312]: E1018 09:16:53.764039    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:17:02 functional-622052 kubelet[4312]: E1018 09:17:02.762105    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:17:05 functional-622052 kubelet[4312]: E1018 09:17:05.762316    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:17:14 functional-622052 kubelet[4312]: E1018 09:17:14.761805    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:17:19 functional-622052 kubelet[4312]: E1018 09:17:19.762641    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:17:27 functional-622052 kubelet[4312]: E1018 09:17:27.762711    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
	Oct 18 09:17:32 functional-622052 kubelet[4312]: E1018 09:17:32.762494    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-2qzhf" podUID="d2c2dbef-6d8c-47c2-ac02-3544d0afd569"
	Oct 18 09:17:40 functional-622052 kubelet[4312]: E1018 09:17:40.761721    4312 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xxpnx" podUID="cc37328f-8ab4-4115-ae68-fddc62f786c7"
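
	Every kubelet line above is the same failure repeating: the test deploys kicbase/echo-server by short name, and this node's containers-registries policy is short-name-mode = "enforcing", so CRI-O refuses the ambiguous unqualified name instead of guessing a registry, leaving both hello-node pods in ImagePullBackOff. Two ways to unstick it (a sketch; the drop-in path assumes the standard /etc/containers layout):

	# 1) pull by a fully-qualified reference instead of the short name
	minikube -p functional-622052 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest

	# 2) or relax enforcement on the node so short names resolve via search registries
	minikube -p functional-622052 ssh -- sudo sh -c \
	  'printf "short-name-mode = \"permissive\"\n" > /etc/containers/registries.conf.d/99-short-names.conf'

	Alternatively, the Deployments themselves could be patched to the fully-qualified image with kubectl set image, which avoids touching node configuration at all.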
	
	
	==> kubernetes-dashboard [68bf413266bd68431abd2e09e6538db6fc8f699519b3730db3ca309d50806686] <==
	2025/10/18 09:07:32 Starting overwatch
	2025/10/18 09:07:32 Using namespace: kubernetes-dashboard
	2025/10/18 09:07:32 Using in-cluster config to connect to apiserver
	2025/10/18 09:07:32 Using secret token for csrf signing
	2025/10/18 09:07:32 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:07:32 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:07:32 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:07:32 Generating JWE encryption key
	2025/10/18 09:07:32 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:07:32 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:07:32 Initializing JWE encryption key from synchronized object
	2025/10/18 09:07:32 Creating in-cluster Sidecar client
	2025/10/18 09:07:32 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:07:32 Serving insecurely on HTTP port: 9090
	2025/10/18 09:08:02 Successful request to sidecar
	
	
	==> storage-provisioner [21f5c159b2ec7347f63a9966babddb362281517320af773991e27950b6be410f] <==
	W1018 09:05:56.057650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:05:58.061187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:05:58.065460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:00.068743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:00.072123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:02.075869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:02.079375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:04.082232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:04.086167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:06.090001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:06.093505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:08.096463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:08.100004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:10.102979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:10.107988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:12.111071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:12.114918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:14.117860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:14.123234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:16.126233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:16.129851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:18.132740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:18.137255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:20.140048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:06:20.143428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [95c0f8c98377ee83721752b42c2f4dc4ca7e687414c11e516da1cdad3161c0bc] <==
	W1018 09:17:16.044842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:18.048328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:18.052148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:20.055404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:20.059406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:22.061926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:22.066540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:24.070497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:24.074781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:26.078017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:26.081635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:28.084969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:28.088580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:30.091242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:30.096432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:32.098910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:32.102993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:34.105985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:34.109614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:36.112220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:36.116023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:38.119177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:38.124098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:40.128065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:17:40.132512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-622052 -n functional-622052
helpers_test.go:269: (dbg) Run:  kubectl --context functional-622052 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-xxpnx hello-node-connect-7d85dfc575-2qzhf
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-622052 describe pod busybox-mount hello-node-75c85bcc94-xxpnx hello-node-connect-7d85dfc575-2qzhf
helpers_test.go:290: (dbg) kubectl --context functional-622052 describe pod busybox-mount hello-node-75c85bcc94-xxpnx hello-node-connect-7d85dfc575-2qzhf:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-622052/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:07:47 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  cri-o://84f77c5958da0a0a5568939f5105080ffa541499fe4e49ce42a7fead214ddd78
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 18 Oct 2025 09:07:50 +0000
	      Finished:     Sat, 18 Oct 2025 09:07:50 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jwm6r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-jwm6r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m54s  default-scheduler  Successfully assigned default/busybox-mount to functional-622052
	  Normal  Pulling    9m53s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.053s (2.053s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-xxpnx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-622052/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:07:26 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59hcl (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-59hcl:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xxpnx to functional-622052
	  Normal   Pulling    7m19s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m19s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m19s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    14s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     14s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-2qzhf
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-622052/192.168.49.2
	Start Time:       Sat, 18 Oct 2025 09:07:38 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lh6w4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lh6w4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-2qzhf to functional-622052
	  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.83s)
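The ImagePullBackOff driving this failure is CRI-O's short-name policy: with short-name mode set to "enforcing", the bare reference "kicbase/echo-server" resolves to more than one candidate registry and the pull is refused. A minimal diagnostic sketch, assuming SSH access to the node and the stock registries.conf locations (paths vary by image build):

	# Show the short-name policy the node is actually running with.
	out/minikube-linux-amd64 -p functional-622052 ssh -- grep -Rn short-name /etc/containers/registries.conf /etc/containers/registries.conf.d/
	# A fully-qualified reference needs no resolution, so this pull should succeed
	# even under enforcing mode (docker.io is an assumption about the image's home).
	out/minikube-linux-amd64 -p functional-622052 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest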

TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-622052 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-622052 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xxpnx" [cc37328f-8ab4-4115-ae68-fddc62f786c7] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-622052 -n functional-622052
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-18 09:17:26.436146706 +0000 UTC m=+1166.144328106
functional_test.go:1460: (dbg) Run:  kubectl --context functional-622052 describe po hello-node-75c85bcc94-xxpnx -n default
functional_test.go:1460: (dbg) kubectl --context functional-622052 describe po hello-node-75c85bcc94-xxpnx -n default:
Name:             hello-node-75c85bcc94-xxpnx
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-622052/192.168.49.2
Start Time:       Sat, 18 Oct 2025 09:07:26 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-59hcl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-59hcl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xxpnx to functional-622052
Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-622052 logs hello-node-75c85bcc94-xxpnx -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-622052 logs hello-node-75c85bcc94-xxpnx -n default: exit status 1 (68.591863ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-xxpnx" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-622052 logs hello-node-75c85bcc94-xxpnx -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.62s)
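This timeout has the same root cause as ServiceCmdConnect above: the deployment's pod never leaves ImagePullBackOff because the short image name is ambiguous under CRI-O's enforcing policy. A hedged workaround sketch that sidesteps resolution by fully qualifying the image (the hello-node-fq name and the docker.io prefix are illustrative, not from the test):

	kubectl --context functional-622052 create deployment hello-node-fq --image docker.io/kicbase/echo-server:latest
	kubectl --context functional-622052 rollout status deployment/hello-node-fq --timeout=120s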

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image load --daemon kicbase/echo-server:functional-622052 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622052" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.02s)
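For context, this and the two daemon-load subtests below all exercise roughly the round trip sketched here; the final grep stands in for the image-list assertion that failed (empty output is exactly what functional_test.go:461 flags):

	# Tag an image in the host Docker daemon, push it into the cluster runtime,
	# then confirm the crio side can list it.
	docker pull kicbase/echo-server:latest
	docker tag kicbase/echo-server:latest kicbase/echo-server:functional-622052
	out/minikube-linux-amd64 -p functional-622052 image load --daemon kicbase/echo-server:functional-622052
	out/minikube-linux-amd64 -p functional-622052 image ls | grep echo-server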

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image load --daemon kicbase/echo-server:functional-622052 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 image load --daemon kicbase/echo-server:functional-622052 --alsologtostderr: (1.560476888s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622052" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.77s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
E1018 09:07:32.413317  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-622052
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image load --daemon kicbase/echo-server:functional-622052 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-622052" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.85s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image save kicbase/echo-server:functional-622052 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1018 09:07:34.940493  169718 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:34.940688  169718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:34.940702  169718 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:34.940709  169718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:34.941000  169718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:07:34.941886  169718 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:34.941983  169718 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:34.942390  169718 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
	I1018 09:07:34.963296  169718 ssh_runner.go:195] Run: systemctl --version
	I1018 09:07:34.963370  169718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
	I1018 09:07:34.983254  169718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
	I1018 09:07:35.085268  169718 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1018 09:07:35.085333  169718 cache_images.go:254] Failed to load cached images for "functional-622052": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1018 09:07:35.085354  169718 cache_images.go:266] failed pushing to: functional-622052

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)
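Note that this is a cascading failure: ImageSaveToFile above never produced the tar, so the load here fails at the stat in cache_images.go. The intended save/load round trip, sketched with an illustrative /tmp path in place of the workspace path:

	out/minikube-linux-amd64 -p functional-622052 image save kicbase/echo-server:functional-622052 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar    # must exist before a load can succeed
	out/minikube-linux-amd64 -p functional-622052 image load /tmp/echo-server.tar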

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-622052
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image save --daemon kicbase/echo-server:functional-622052 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-622052
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-622052: exit status 1 (21.322213ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-622052

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-622052

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)
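The inspect step targets localhost/kicbase/echo-server:functional-622052 because that is the repository the test expects the image to land under when exported from the crio runtime back into the Docker daemon (an inference drawn from the test's own inspect target). A sketch of the verification the test performs:

	out/minikube-linux-amd64 -p functional-622052 image save --daemon kicbase/echo-server:functional-622052
	docker image inspect localhost/kicbase/echo-server:functional-622052 --format '{{.Id}}'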

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 service --namespace=default --https --url hello-node: exit status 115 (524.505789ms)

-- stdout --
	https://192.168.49.2:30740
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-622052 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 service hello-node --url --format={{.IP}}: exit status 115 (520.560425ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-622052 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 service hello-node --url: exit status 115 (520.295935ms)

-- stdout --
	http://192.168.49.2:30740
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-622052 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30740
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
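All three URL-style subtests (HTTPS, Format, URL) fail identically: the NodePort is allocated, but SVC_UNREACHABLE fires because no running pod backs the service, a downstream symptom of the echo-server ImagePullBackOff above. A quick sketch for confirming that from the kubectl side (EndpointSlice is used here since the storage-provisioner logs above warn that v1 Endpoints is deprecated):

	kubectl --context functional-622052 get endpointslices -n default -l kubernetes.io/service-name=hello-node
	kubectl --context functional-622052 get svc hello-node -n default -o jsonpath='{.spec.ports[0].nodePort}'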

TestJSONOutput/pause/Command (2.39s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-309581 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-309581 --output=json --user=testUser: exit status 80 (2.393678395s)

-- stdout --
	{"specversion":"1.0","id":"2ce8a9ed-78fc-4e1d-98a4-1c4405708ee1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-309581 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9e42fdbd-3767-4b6c-812a-63b9bfbc1777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T09:27:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"43c2f44a-7cc2-40ff-9e20-cf4374a54500","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-309581 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.39s)

TestJSONOutput/unpause/Command (2.07s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-309581 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-309581 --output=json --user=testUser: exit status 80 (2.073228714s)

-- stdout --
	{"specversion":"1.0","id":"a9b55c24-e96f-4895-8fe7-efffd9b57f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-309581 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"a3fe860b-d456-4e46-9a80-412aecfd1400","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-18T09:27:08Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"5fbfe03c-b80d-4c82-9aae-0bca53cd2bd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-309581 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.07s)
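Both pause and unpause die on the same probe: minikube shells into the node and runs `sudo runc list -f json`, which aborts because /run/runc does not exist. One plausible, unconfirmed explanation is that this CRI-O build is configured with a different low-level runtime such as crun, leaving runc with no state directory. A sketch for checking from outside the test:

	out/minikube-linux-amd64 -p json-output-309581 ssh -- sudo runc list -f json   # reproduces the failing probe
	out/minikube-linux-amd64 -p json-output-309581 ssh -- sudo ls /run             # which runtime state dirs actually exist
	out/minikube-linux-amd64 -p json-output-309581 ssh -- sudo crictl ps           # the CRI-level view still works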

TestPause/serial/Pause (6.62s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-238319 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-238319 --alsologtostderr -v=5: exit status 80 (2.529609397s)

-- stdout --
	* Pausing node pause-238319 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:41:32.442723  340287 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:41:32.443649  340287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:32.443689  340287 out.go:374] Setting ErrFile to fd 2...
	I1018 09:41:32.443706  340287 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:32.444020  340287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:41:32.444358  340287 out.go:368] Setting JSON to false
	I1018 09:41:32.444427  340287 mustload.go:65] Loading cluster: pause-238319
	I1018 09:41:32.444947  340287 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:32.445506  340287 cli_runner.go:164] Run: docker container inspect pause-238319 --format={{.State.Status}}
	I1018 09:41:32.472689  340287 host.go:66] Checking if "pause-238319" exists ...
	I1018 09:41:32.473085  340287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:32.561221  340287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-18 09:41:32.546581299 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:32.562140  340287 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-238319 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:41:32.564161  340287 out.go:179] * Pausing node pause-238319 ... 
	I1018 09:41:32.565301  340287 host.go:66] Checking if "pause-238319" exists ...
	I1018 09:41:32.565667  340287 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:32.565708  340287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:32.589056  340287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:32.695054  340287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:32.709936  340287 pause.go:52] kubelet running: true
	I1018 09:41:32.710042  340287 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:41:32.893644  340287 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:41:32.893948  340287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:41:32.992246  340287 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:32.992272  340287 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:32.992278  340287 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:32.992283  340287 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:32.992287  340287 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:32.992291  340287 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:32.992295  340287 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:32.992299  340287 cri.go:89] found id: ""
	I1018 09:41:32.992344  340287 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:41:33.005509  340287 retry.go:31] will retry after 247.449653ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:33Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:41:33.254043  340287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:33.267853  340287 pause.go:52] kubelet running: false
	I1018 09:41:33.267923  340287 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:41:33.393718  340287 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:41:33.393864  340287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:41:33.470575  340287 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:33.470607  340287 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:33.470611  340287 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:33.470615  340287 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:33.470618  340287 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:33.470621  340287 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:33.470624  340287 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:33.470626  340287 cri.go:89] found id: ""
	I1018 09:41:33.470674  340287 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:41:33.482442  340287 retry.go:31] will retry after 385.046335ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:33Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:41:33.867853  340287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:33.881769  340287 pause.go:52] kubelet running: false
	I1018 09:41:33.881857  340287 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:41:34.016106  340287 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:41:34.016202  340287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:41:34.084916  340287 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:34.084942  340287 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:34.084949  340287 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:34.084954  340287 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:34.084957  340287 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:34.084962  340287 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:34.084967  340287 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:34.084972  340287 cri.go:89] found id: ""
	I1018 09:41:34.085035  340287 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:41:34.096784  340287 retry.go:31] will retry after 585.430593ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:34Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:41:34.682548  340287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:34.695580  340287 pause.go:52] kubelet running: false
	I1018 09:41:34.695649  340287 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:41:34.801557  340287 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:41:34.801662  340287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:41:34.875887  340287 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:34.875920  340287 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:34.875926  340287 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:34.875931  340287 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:34.875935  340287 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:34.875939  340287 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:34.875943  340287 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:34.875947  340287 cri.go:89] found id: ""
	I1018 09:41:34.875993  340287 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:41:34.892369  340287 out.go:203] 
	W1018 09:41:34.893744  340287 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:41:34.893770  340287 out.go:285] * 
	* 
	W1018 09:41:34.900782  340287 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:41:34.902324  340287 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-238319 --alsologtostderr -v=5" : exit status 80
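The failing probe throughout this test is `sudo runc list -f json`, which exits 1 whenever runc's state root `/run/runc` has never been created; on a crio node that can simply mean runc holds no container state at that moment. A minimal sketch of such a probe in Go, treating the missing state root as an empty container list (the fallback behavior is an assumption for illustration, not minikube's actual handling):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// listRuncContainers runs the same command the log shows and distinguishes a
// missing /run/runc state root from a real failure.
func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// runc prints "open /run/runc: no such file or directory" when its
		// state directory was never created (e.g. nothing has run under runc).
		if strings.Contains(string(out), "no such file or directory") {
			return "[]", nil // treat as an empty list rather than a hard error
		}
		return "", fmt.Errorf("runc list: %v: %s", err, out)
	}
	return string(out), nil
}

func main() {
	list, err := listRuncContainers()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(list)
}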
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-238319
helpers_test.go:243: (dbg) docker inspect pause-238319:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427",
	        "Created": "2025-10-18T09:40:48.429110456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321963,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:40:48.471150144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/hosts",
	        "LogPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427-json.log",
	        "Name": "/pause-238319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-238319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-238319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427",
	                "LowerDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-238319",
	                "Source": "/var/lib/docker/volumes/pause-238319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-238319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-238319",
	                "name.minikube.sigs.k8s.io": "pause-238319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27de14e01e5994941bb9f5343bc2d852cd66eccfdfea74f1509be9e7b3876d7b",
	            "SandboxKey": "/var/run/docker/netns/27de14e01e59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-238319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:b6:d7:55:8a:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc34b4af0845da6c802dc81c73f4b4277beaad88933c210bf42a502e8671cd1e",
	                    "EndpointID": "843886e85ecb6b6a98659ebcd94714e55c69b175e039798c71961a40a0c31534",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-238319",
	                        "f3bf6c5c8f72"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
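Most of the inspect dump matters here only for the State block and the published ports. When a single field is needed, `docker inspect` evaluates a Go template directly; a hypothetical helper (not part of the test suite) that reads the host port mapped to the API server:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// apiServerHostPort extracts one field from `docker inspect` via a Go
// template instead of parsing the full JSON document.
func apiServerHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := apiServerHostPort("pause-238319")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("API server published on 127.0.0.1:" + port) // 33129 in the dump above
}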
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-238319 -n pause-238319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-238319 -n pause-238319: exit status 2 (363.55986ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
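The "may be ok" caveat exists because `minikube status` encodes component health in its exit code: a profile whose kubelet has been stopped mid-pause returns a non-zero status even though the host container itself reports Running. A sketch that reads the status as JSON instead of relying on exit codes (the JSON field names are assumed from typical `minikube status -o json` output):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileStatus mirrors the common fields of `minikube status -o json`;
// the exact field set is an assumption here.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-238319", "-o", "json").Output()
	if err != nil && len(out) == 0 {
		// status exits non-zero for any non-running component, but the JSON
		// on stdout is still usable; only bail when there is no output at all
		log.Fatal(err)
	}
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
}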
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-238319 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-238319 logs -n 25: (1.089986002s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-345705 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat docker --no-pager                                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/docker/daemon.json                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo docker system info                                                                                    │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cri-dockerd --version                                                                                 │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat containerd --no-pager                                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/containerd/config.toml                                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo containerd config dump                                                                                │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat crio --no-pager                                                                         │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo crio config                                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p cilium-345705                                                                                                            │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p running-upgrade-896586                                                                                                   │ running-upgrade-896586    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ start   │ -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                      │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:41:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:41:24.288816  336575 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:41:24.289108  336575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:24.289121  336575 out.go:374] Setting ErrFile to fd 2...
	I1018 09:41:24.289129  336575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:24.289366  336575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:41:24.289927  336575 out.go:368] Setting JSON to false
	I1018 09:41:24.291235  336575 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5028,"bootTime":1760775456,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:41:24.291321  336575 start.go:141] virtualization: kvm guest
	I1018 09:41:24.321398  336575 out.go:179] * [pause-238319] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:41:24.326846  336575 notify.go:220] Checking for updates...
	I1018 09:41:24.326903  336575 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:41:24.379170  336575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:41:24.419393  336575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:24.561406  336575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:41:20.200590  331569 cli_runner.go:164] Run: docker network inspect missing-upgrade-631894 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:20.217919  331569 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:20.221965  331569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
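The one-liner above keeps the hosts entry idempotent: filter out any stale `host.minikube.internal` line, append the current gateway IP, and stage the result through a temp file before copying it over /etc/hosts. The same update as a simplified Go sketch (a direct write replaces the temp-and-copy step; it needs the same root privileges):

package main

import (
	"log"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<name>" and appends a
// fresh "<ip>\t<name>" entry, mirroring the grep/echo pipeline in the log.
func setHostsEntry(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // filter stale entries
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := setHostsEntry("192.168.94.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}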
	I1018 09:41:20.234148  331569 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:41:20.234195  331569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:20.306478  331569 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 09:41:20.306491  331569 crio.go:415] Images already preloaded, skipping extraction
	I1018 09:41:20.306535  331569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:20.357447  331569 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 09:41:20.357464  331569 cache_images.go:84] Images are preloaded, skipping loading
	I1018 09:41:20.357535  331569 ssh_runner.go:195] Run: crio config
	I1018 09:41:20.407577  331569 cni.go:84] Creating CNI manager for ""
	I1018 09:41:20.407594  331569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:20.407620  331569 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:20.407646  331569 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-631894 NodeName:missing-upgrade-631894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:20.407836  331569 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-631894"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
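The rendered file above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick, hypothetical sanity check that a generated config contains the expected documents (requires the gopkg.in/yaml.v3 module; the path is the one staged later in the log):

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	// yaml.v3's Decoder iterates over `---`-separated documents natively.
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var meta struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&meta); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// expected kinds: InitConfiguration, ClusterConfiguration,
		// KubeletConfiguration, KubeProxyConfiguration
		fmt.Printf("%s %s\n", meta.APIVersion, meta.Kind)
	}
}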
	
	I1018 09:41:20.407920  331569 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-631894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-631894 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1018 09:41:20.407982  331569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1018 09:41:20.418178  331569 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:20.418244  331569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:20.428100  331569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1018 09:41:20.448021  331569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:20.472305  331569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1018 09:41:20.494071  331569 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:20.498312  331569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:20.511495  331569 certs.go:56] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894 for IP: 192.168.94.2
	I1018 09:41:20.511542  331569 certs.go:190] acquiring lock for shared ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.511712  331569 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:20.511748  331569 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:20.511801  331569 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key
	I1018 09:41:20.511817  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt with IP's: []
	I1018 09:41:20.696367  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt ...
	I1018 09:41:20.696386  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt: {Name:mk51bf5afbe904b78b9574c2fb9cadd5afabe338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.696586  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key ...
	I1018 09:41:20.696612  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key: {Name:mkedc03b8ae5cc6d524aeeda020e1557303aa579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.696747  331569 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a
	I1018 09:41:20.696764  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1018 09:41:21.030819  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a ...
	I1018 09:41:21.030858  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a: {Name:mk7a2d75cdb4fca07c179be3f5b6d3b1671ef307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.031052  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a ...
	I1018 09:41:21.031068  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a: {Name:mk7c78659988893a4580797ea91a9a97127e2e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.031170  331569 certs.go:337] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt
	I1018 09:41:21.031255  331569 certs.go:341] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key
	I1018 09:41:21.031317  331569 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key
	I1018 09:41:21.031331  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt with IP's: []
	I1018 09:41:21.461630  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt ...
	I1018 09:41:21.461653  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt: {Name:mke8614aad72b5f639121966ef3fa66b60af1af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.461817  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key ...
	I1018 09:41:21.461849  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key: {Name:mka98cfff82147a59349ca9ee298e41761c35c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
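The `generating ... signed cert` steps above mint per-profile certificates signed by the shared minikube CA, with the listed IP SANs baked in. A self-contained sketch of that signing step using the standard library's crypto/x509 (a throwaway CA stands in for .minikube/ca.{crt,key}; fields, key sizes, and error handling are simplified):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// The real flow loads the CA pair from .minikube/ca.{crt,key}; a
	// throwaway CA is generated here so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf certificate with the IP SANs shown in the apiserver log line.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.94.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}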
	I1018 09:41:21.462064  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:21.462104  331569 certs.go:433] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:21.462117  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:21.462150  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:21.462271  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:21.462337  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:21.462384  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:21.463109  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1018 09:41:21.490725  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:21.516775  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:21.547113  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:41:21.579327  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:21.607879  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:21.634263  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:21.660707  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:21.686408  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:21.716439  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:21.744025  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:21.769772  331569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:21.788480  331569 ssh_runner.go:195] Run: openssl version
	I1018 09:41:21.794743  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:21.805844  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.809941  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.809987  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.817167  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:41:21.828232  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:21.838890  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.843051  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.843110  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.850177  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:21.860873  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:21.871434  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.875621  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.875676  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.882682  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
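The hash-and-symlink sequence above follows OpenSSL's hashed-directory convention: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a `<hash>.0` symlink in /etc/ssl/certs lets OpenSSL resolve the CA by that hash at verification time. A hypothetical Go equivalent of a single install (requires root; assumes no hash collision, hence the fixed `.0` suffix):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject hash, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	os.Remove(link) // replace any stale link, like `ln -fs` would
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}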
	I1018 09:41:21.893020  331569 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1018 09:41:21.896854  331569 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1018 09:41:21.896923  331569 kubeadm.go:404] StartCluster: {Name:missing-upgrade-631894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-631894 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 09:41:21.897107  331569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:21.897165  331569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:21.934386  331569 cri.go:89] found id: ""
	I1018 09:41:21.934455  331569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:21.944153  331569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:21.953812  331569 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:21.953878  331569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:21.963251  331569 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:21.963299  331569 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:22.049049  331569 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:41:22.121306  331569 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:41:24.935365  336575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:41:25.155096  336575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:41:25.189539  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:25.190207  336575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:41:25.218158  336575 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:41:25.218247  336575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:25.276799  336575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:41:25.266869533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:25.276928  336575 docker.go:318] overlay module found
	I1018 09:41:25.458534  336575 out.go:179] * Using the docker driver based on existing profile
	I1018 09:41:25.496958  336575 start.go:305] selected driver: docker
	I1018 09:41:25.496984  336575 start.go:925] validating driver "docker" against &{Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:25.497140  336575 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:41:25.497226  336575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:25.549690  336575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:41:25.540665834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:25.550556  336575 cni.go:84] Creating CNI manager for ""
	I1018 09:41:25.550626  336575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:25.550683  336575 start.go:349] cluster config:
	{Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
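Note: the cluster-config dump above is the same structure that is saved a few lines later to profiles/pause-238319/config.json. A trimmed, stand-alone Go sketch of reading that file back; only fields visible in the dump are modeled, and the struct is inferred from the log rather than taken from minikube's full ClusterConfig type:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // Partial view of the profile config; only fields that appear in the
    // dump above. The real minikube struct carries many more.
    type clusterConfig struct {
        Name             string
        Driver           string
        Memory           int
        CPUs             int
        KubernetesConfig struct {
            KubernetesVersion string
            ClusterName       string
            ContainerRuntime  string
            ServiceCIDR       string
        }
    }

    func main() {
        // e.g. .minikube/profiles/pause-238319/config.json
        raw, err := os.ReadFile("config.json")
        if err != nil {
            panic(err)
        }
        var cc clusterConfig
        if err := json.Unmarshal(raw, &cc); err != nil {
            panic(err)
        }
        fmt.Printf("%s: Kubernetes %s on %s (%s)\n", cc.Name,
            cc.KubernetesConfig.KubernetesVersion, cc.Driver,
            cc.KubernetesConfig.ContainerRuntime)
    }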
	I1018 09:41:20.780280  335228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:41:20.780515  335228 start.go:159] libmachine.API.Create for "force-systemd-flag-565668" (driver="docker")
	I1018 09:41:20.780612  335228 client.go:168] LocalClient.Create starting
	I1018 09:41:20.780694  335228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:41:20.780736  335228 main.go:141] libmachine: Decoding PEM data...
	I1018 09:41:20.780763  335228 main.go:141] libmachine: Parsing certificate...
	I1018 09:41:20.780862  335228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:41:20.780896  335228 main.go:141] libmachine: Decoding PEM data...
	I1018 09:41:20.780913  335228 main.go:141] libmachine: Parsing certificate...
	I1018 09:41:20.781243  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:41:20.800540  335228 cli_runner.go:211] docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:41:20.800623  335228 network_create.go:284] running [docker network inspect force-systemd-flag-565668] to gather additional debugging logs...
	I1018 09:41:20.800650  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668
	W1018 09:41:20.825202  335228 cli_runner.go:211] docker network inspect force-systemd-flag-565668 returned with exit code 1
	I1018 09:41:20.825245  335228 network_create.go:287] error running [docker network inspect force-systemd-flag-565668]: docker network inspect force-systemd-flag-565668: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-565668 not found
	I1018 09:41:20.825263  335228 network_create.go:289] output of [docker network inspect force-systemd-flag-565668]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-565668 not found
	
	** /stderr **
	I1018 09:41:20.825419  335228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:20.849770  335228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:41:20.850277  335228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:41:20.850905  335228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:41:20.851471  335228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cc34b4af0845 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:10:92:51:72:61} reservation:<nil>}
	I1018 09:41:20.852311  335228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cb7d90}
	I1018 09:41:20.852333  335228 network_create.go:124] attempt to create docker network force-systemd-flag-565668 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 09:41:20.852376  335228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-565668 force-systemd-flag-565668
	I1018 09:41:20.935790  335228 network_create.go:108] docker network force-systemd-flag-565668 192.168.85.0/24 created
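Note: the four "skipping subnet ... that is taken" lines show the free-subnet search that precedes the network create: candidates start at 192.168.49.0/24 and the third octet advances by 9 (49, 58, 67, 76, 85) until an unused /24 turns up. A minimal stand-alone sketch of that scan against the local docker daemon; the candidate ladder is read off the log, and minikube's real check also inspects host interfaces:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // existingSubnets collects the IPv4 subnets of every docker network,
    // the same information the "skipping subnet" lines are built from.
    func existingSubnets() (map[string]bool, error) {
        ids, err := exec.Command("docker", "network", "ls", "-q").Output()
        if err != nil {
            return nil, err
        }
        taken := make(map[string]bool)
        for _, id := range strings.Fields(string(ids)) {
            out, err := exec.Command("docker", "network", "inspect",
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
            if err != nil {
                continue // network may have vanished mid-scan
            }
            for _, s := range strings.Fields(string(out)) {
                taken[s] = true
            }
        }
        return taken, nil
    }

    func main() {
        taken, err := existingSubnets()
        if err != nil {
            panic(err)
        }
        // Candidate ladder as seen in the log: 192.168.49.0/24, .58, .67, ...
        for octet := 49; octet <= 247; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                fmt.Println("using free private subnet", subnet)
                return
            }
        }
        fmt.Println("no free 192.168.x.0/24 candidate left")
    }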
	I1018 09:41:20.935852  335228 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-565668" container
	I1018 09:41:20.935932  335228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:41:20.958263  335228 cli_runner.go:164] Run: docker volume create force-systemd-flag-565668 --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:41:20.977754  335228 oci.go:103] Successfully created a docker volume force-systemd-flag-565668
	I1018 09:41:20.977842  335228 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-565668-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --entrypoint /usr/bin/test -v force-systemd-flag-565668:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:41:21.396338  335228 oci.go:107] Successfully prepared a docker volume force-systemd-flag-565668
	I1018 09:41:21.396397  335228 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:21.396437  335228 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:41:21.396517  335228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-565668:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
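Note: the docker run above is the preload trick: a throwaway container mounts the .tar.lz4 preload read-only and the freshly created volume, then untars the cached images into it so the node container starts with /var already populated. A hedged Go wrapper for the same pattern; the arguments in main are placeholders, the real paths appear in the log line:

    package main

    import (
        "os"
        "os/exec"
    )

    // extractPreload untars a .tar.lz4 preload into a named docker volume by
    // running tar inside a disposable container that mounts both, mirroring
    // the "docker run --rm --entrypoint /usr/bin/tar ..." invocation above.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        if err := extractPreload("/tmp/preloaded-images.tar.lz4", "demo-volume",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"); err != nil {
            panic(err)
        }
    }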
	I1018 09:41:25.592507  336575 out.go:179] * Starting "pause-238319" primary control-plane node in "pause-238319" cluster
	I1018 09:41:25.595848  336575 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:41:25.624994  336575 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:41:25.857412  336575 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:41:25.857418  336575 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:25.857493  336575 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:41:25.857505  336575 cache.go:58] Caching tarball of preloaded images
	I1018 09:41:25.857607  336575 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:41:25.857624  336575 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:41:25.857781  336575 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/config.json ...
	I1018 09:41:25.877715  336575 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:41:25.877739  336575 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:41:25.877758  336575 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:41:25.877788  336575 start.go:360] acquireMachinesLock for pause-238319: {Name:mkcd41232403b5a8a9e87ba238de3b17794afc29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:41:25.877871  336575 start.go:364] duration metric: took 58.249µs to acquireMachinesLock for "pause-238319"
	I1018 09:41:25.877896  336575 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:41:25.877906  336575 fix.go:54] fixHost starting: 
	I1018 09:41:25.878131  336575 cli_runner.go:164] Run: docker container inspect pause-238319 --format={{.State.Status}}
	I1018 09:41:25.896685  336575 fix.go:112] recreateIfNeeded on pause-238319: state=Running err=<nil>
	W1018 09:41:25.896713  336575 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:41:24.027073  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:41:24.027093  332699 ubuntu.go:182] provisioning hostname "cert-expiration-650496"
	I1018 09:41:24.027186  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.045073  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:24.045272  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:24.045279  332699 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-650496 && echo "cert-expiration-650496" | sudo tee /etc/hostname
	I1018 09:41:24.241542  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:41:24.241655  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.262040  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:24.262241  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:24.262252  332699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-650496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-650496/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-650496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:24.398522  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:24.398552  332699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:24.398574  332699 ubuntu.go:190] setting up certificates
	I1018 09:41:24.398594  332699 provision.go:84] configureAuth start
	I1018 09:41:24.398657  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:24.418619  332699 provision.go:143] copyHostCerts
	I1018 09:41:24.418679  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:24.418688  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:24.418762  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:24.418922  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:24.418929  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:24.418970  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:24.419063  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:24.419069  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:24.419116  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:24.419191  332699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-650496 san=[127.0.0.1 192.168.103.2 cert-expiration-650496 localhost minikube]
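Note: the "generating server cert" step issues a TLS server certificate whose SANs cover every name the machine answers to: loopback, the container IP, the machine name, localhost, and minikube. A self-signed sketch with Go's crypto/x509 (the real cert is signed by the minikube CA rather than by itself; the 26280h lifetime matches CertExpiration in the config dump):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-650496"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs exactly as they appear in the provision log above.
            DNSNames:    []string{"cert-expiration-650496", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }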
	I1018 09:41:24.559927  332699 provision.go:177] copyRemoteCerts
	I1018 09:41:24.559988  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:24.560023  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.580908  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:24.677388  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:24.972623  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:24.990481  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:41:25.008127  332699 provision.go:87] duration metric: took 609.51711ms to configureAuth
	I1018 09:41:25.008150  332699 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:25.008324  332699 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:25.008411  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.025506  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:25.025718  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:25.025729  332699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:25.746005  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:41:25.746020  332699 machine.go:96] duration metric: took 4.878467225s to provisionDockerMachine
	I1018 09:41:25.746033  332699 client.go:171] duration metric: took 13.306226966s to LocalClient.Create
	I1018 09:41:25.746050  332699 start.go:167] duration metric: took 13.306288297s to libmachine.API.Create "cert-expiration-650496"
	I1018 09:41:25.746056  332699 start.go:293] postStartSetup for "cert-expiration-650496" (driver="docker")
	I1018 09:41:25.746064  332699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:25.746114  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:25.746145  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.763700  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:25.880574  332699 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:25.884784  332699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:25.884807  332699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:25.884817  332699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:25.884882  332699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:25.884973  332699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:25.885088  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:25.894316  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:25.918934  332699 start.go:296] duration metric: took 172.865255ms for postStartSetup
	I1018 09:41:25.919340  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:25.946852  332699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/config.json ...
	I1018 09:41:25.947170  332699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:25.947218  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.969944  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.078482  332699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:26.085016  332699 start.go:128] duration metric: took 13.648693998s to createHost
	I1018 09:41:26.085037  332699 start.go:83] releasing machines lock for "cert-expiration-650496", held for 13.648836289s
	I1018 09:41:26.085112  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:26.114457  332699 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:26.114510  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:26.114939  332699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:26.115017  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:26.145064  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.147469  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.346256  332699 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:26.355486  332699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:26.407233  332699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:26.417400  332699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:26.417466  332699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:26.451557  332699 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
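Note: the find/-exec mv pair above side-lines any bridge or podman CNI configs by renaming them to *.mk_disabled, so that kindnet can own pod networking. The same rename in Go, as a sketch; the real command matches file names with shell globs and likewise skips files already carrying the .mk_disabled suffix:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs in dir to
    // *.mk_disabled, mirroring the find/-exec mv step in the log.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var disabled []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, src)
            }
        }
        return disabled, nil
    }

    func main() {
        disabled, err := disableBridgeCNI("/etc/cni/net.d")
        if err != nil {
            panic(err)
        }
        fmt.Println("disabled:", disabled)
    }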
	I1018 09:41:26.451574  332699 start.go:495] detecting cgroup driver to use...
	I1018 09:41:26.451645  332699 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:41:26.451790  332699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:26.475528  332699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:26.495396  332699 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:26.495445  332699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:26.520655  332699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:26.542990  332699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:26.697286  332699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:26.863314  332699 docker.go:234] disabling docker service ...
	I1018 09:41:26.863385  332699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:26.893807  332699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:26.915405  332699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:27.058442  332699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:27.185781  332699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:41:27.201519  332699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:27.220479  332699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:27.220543  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.237193  332699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:27.237244  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.249155  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.262016  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.273886  332699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:27.285771  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.297976  332699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.315361  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.326115  332699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:27.336149  332699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:41:27.346189  332699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:27.454768  332699 ssh_runner.go:195] Run: sudo systemctl restart crio
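Note: after the crictl.yaml write and the sed series above, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up looking roughly like this before the restart; a sketch of the touched keys only, the shipped file carries more, and the table placement follows CRI-O's standard layout rather than anything shown in the log:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]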
	I1018 09:41:27.568174  332699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:27.568229  332699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:27.573567  332699 start.go:563] Will wait 60s for crictl version
	I1018 09:41:27.573613  332699 ssh_runner.go:195] Run: which crictl
	I1018 09:41:27.578332  332699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:27.606406  332699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
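Note: "Will wait 60s for socket path" and "Will wait 60s for crictl version" are plain polls: stat the socket (then run crictl version) until one succeeds or the budget runs out. A minimal Go version of the first wait; the 500ms poll interval is an assumption, not taken from the log:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, like the
    // "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio socket is up")
    }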
	I1018 09:41:27.606476  332699 ssh_runner.go:195] Run: crio --version
	I1018 09:41:27.636884  332699 ssh_runner.go:195] Run: crio --version
	I1018 09:41:27.674068  332699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:25.898750  336575 out.go:252] * Updating the running docker "pause-238319" container ...
	I1018 09:41:25.898790  336575 machine.go:93] provisionDockerMachine start ...
	I1018 09:41:25.898910  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:25.920660  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:25.921014  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:25.921041  336575 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:41:26.079806  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-238319
	
	I1018 09:41:26.079854  336575 ubuntu.go:182] provisioning hostname "pause-238319"
	I1018 09:41:26.079910  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.105645  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:26.106231  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:26.106262  336575 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-238319 && echo "pause-238319" | sudo tee /etc/hostname
	I1018 09:41:26.286316  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-238319
	
	I1018 09:41:26.286403  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.309023  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:26.309356  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:26.309388  336575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-238319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-238319/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-238319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:26.463175  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:26.463208  336575 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:26.463232  336575 ubuntu.go:190] setting up certificates
	I1018 09:41:26.463243  336575 provision.go:84] configureAuth start
	I1018 09:41:26.463303  336575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-238319
	I1018 09:41:26.487898  336575 provision.go:143] copyHostCerts
	I1018 09:41:26.487983  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:26.488004  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:26.488088  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:26.488929  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:26.488944  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:26.488995  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:26.489109  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:26.489116  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:26.489151  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:26.489242  336575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.pause-238319 san=[127.0.0.1 192.168.76.2 localhost minikube pause-238319]
	I1018 09:41:26.775177  336575 provision.go:177] copyRemoteCerts
	I1018 09:41:26.775309  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:26.775364  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.811423  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:26.939068  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:26.970564  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:41:27.002140  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:27.025634  336575 provision.go:87] duration metric: took 562.375747ms to configureAuth
	I1018 09:41:27.025767  336575 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:27.026093  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:27.026234  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.052747  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.053138  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:27.053168  336575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:27.422411  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:41:27.422444  336575 machine.go:96] duration metric: took 1.523644304s to provisionDockerMachine
	I1018 09:41:27.422458  336575 start.go:293] postStartSetup for "pause-238319" (driver="docker")
	I1018 09:41:27.422472  336575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:27.422559  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:27.422607  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.443115  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.546048  336575 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:27.550484  336575 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:27.550517  336575 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:27.550529  336575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:27.550595  336575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:27.550698  336575 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:27.550911  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:27.561481  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:27.583852  336575 start.go:296] duration metric: took 161.37521ms for postStartSetup
	I1018 09:41:27.583931  336575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:27.583985  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.605872  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.706053  336575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:27.711912  336575 fix.go:56] duration metric: took 1.834000888s for fixHost
	I1018 09:41:27.711953  336575 start.go:83] releasing machines lock for "pause-238319", held for 1.834053861s
	I1018 09:41:27.712015  336575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-238319
	I1018 09:41:27.732355  336575 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:27.732414  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.732437  336575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:27.732516  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.754065  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.754645  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.930889  336575 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:27.941675  336575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:28.003105  336575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:28.008601  336575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:28.008677  336575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:28.017411  336575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:41:28.017435  336575 start.go:495] detecting cgroup driver to use...
	I1018 09:41:28.017466  336575 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:41:28.017507  336575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:28.033788  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:28.049538  336575 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:28.049606  336575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:28.086143  336575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:28.108551  336575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:28.255769  336575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:28.381755  336575 docker.go:234] disabling docker service ...
	I1018 09:41:28.381854  336575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:28.399715  336575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:28.414175  336575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:28.561962  336575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:28.682695  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:41:28.695669  336575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:28.711636  336575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:28.711702  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.721965  336575 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:28.722033  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.731294  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.740583  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.749658  336575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:28.757880  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.768182  336575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.777385  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.786720  336575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:28.794740  336575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:41:28.802630  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:28.952585  336575 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:41:29.122922  336575 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:29.123001  336575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:29.128177  336575 start.go:563] Will wait 60s for crictl version
	I1018 09:41:29.128247  336575 ssh_runner.go:195] Run: which crictl
	I1018 09:41:29.132650  336575 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:29.164957  336575 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:41:29.165038  336575 ssh_runner.go:195] Run: crio --version
	I1018 09:41:29.200020  336575 ssh_runner.go:195] Run: crio --version
	I1018 09:41:29.242943  336575 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:29.244452  336575 cli_runner.go:164] Run: docker network inspect pause-238319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:29.265791  336575 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:29.270622  336575 kubeadm.go:883] updating cluster {Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:29.270798  336575 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:29.270885  336575 ssh_runner.go:195] Run: sudo crictl images --output json
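Note: `sudo crictl images --output json` is how the preload check asks the runtime which images already exist before deciding whether anything needs pulling. A sketch that runs the same command and lists the tags; the JSON shape (an `images` array with `repoTags`) follows the CRI ListImages response, so treat the field names as an assumption:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    type crictlImages struct {
        Images []struct {
            ID       string   `json:"id"`
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        // Same invocation as the log line above.
        raw, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        var out crictlImages
        if err := json.Unmarshal(raw, &out); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        for _, img := range out.Images {
            for _, t := range img.RepoTags {
                fmt.Println(t)
            }
        }
    }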
	I1018 09:41:25.906623  335228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-565668:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.510057539s)
	I1018 09:41:25.906663  335228 kic.go:203] duration metric: took 4.510234729s to extract preloaded images to volume ...
	W1018 09:41:25.906758  335228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:41:25.906811  335228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:41:25.906887  335228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:41:25.982498  335228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-565668 --name force-systemd-flag-565668 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-565668 --network force-systemd-flag-565668 --ip 192.168.85.2 --volume force-systemd-flag-565668:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:41:26.349427  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Running}}
	I1018 09:41:26.375989  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:26.397381  335228 cli_runner.go:164] Run: docker exec force-systemd-flag-565668 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:41:26.455156  335228 oci.go:144] the created container "force-systemd-flag-565668" has a running status.
	I1018 09:41:26.455191  335228 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa...
	I1018 09:41:26.861223  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1018 09:41:26.861333  335228 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:41:26.896938  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:26.924676  335228 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:41:26.924736  335228 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-565668 chown docker:docker /home/docker/.ssh/authorized_keys]
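Note: "Creating ssh key for kic" generates the RSA keypair whose public half (the 381-byte file above) is copied into the container as /home/docker/.ssh/authorized_keys and chowned to docker. A stand-alone sketch of that keygen using golang.org/x/crypto/ssh; file names are placeholders:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // PEM-encode the private key, as in a machine dir's id_rsa.
        priv := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        // authorized_keys-format public key, the file copied into the node.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa", priv, 0600); err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
            panic(err)
        }
    }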
	I1018 09:41:26.992300  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:27.017072  335228 machine.go:93] provisionDockerMachine start ...
	I1018 09:41:27.017271  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.043441  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.043787  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.043802  335228 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:41:27.210715  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-565668
	
	I1018 09:41:27.210749  335228 ubuntu.go:182] provisioning hostname "force-systemd-flag-565668"
	I1018 09:41:27.210813  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.235014  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.235318  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.235340  335228 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-565668 && echo "force-systemd-flag-565668" | sudo tee /etc/hostname
	I1018 09:41:27.404859  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-565668
	
	I1018 09:41:27.404961  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.426277  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.426555  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.426575  335228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-565668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-565668/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-565668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:27.570020  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:27.570051  335228 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:27.570076  335228 ubuntu.go:190] setting up certificates
	I1018 09:41:27.570089  335228 provision.go:84] configureAuth start
	I1018 09:41:27.570148  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:27.590488  335228 provision.go:143] copyHostCerts
	I1018 09:41:27.590530  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:27.590571  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:27.590583  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:27.590669  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:27.590787  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:27.590816  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:27.590838  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:27.590882  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:27.590960  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:27.590985  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:27.590991  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:27.591033  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:27.591108  335228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-565668 san=[127.0.0.1 192.168.85.2 force-systemd-flag-565668 localhost minikube]
	I1018 09:41:28.126042  335228 provision.go:177] copyRemoteCerts
	I1018 09:41:28.126124  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:28.126173  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.148088  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.258378  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 09:41:28.258450  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 09:41:28.278583  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 09:41:28.278643  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:28.301960  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 09:41:28.302030  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:28.321812  335228 provision.go:87] duration metric: took 751.709103ms to configureAuth
	I1018 09:41:28.321855  335228 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:28.322027  335228 config.go:182] Loaded profile config "force-systemd-flag-565668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:28.322141  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.341102  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:28.341445  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:28.341473  335228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:28.584690  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
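The exchange above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the --insecure-registry setting for the service CIDR takes effect. A minimal way to double-check the result on the node, assuming the standard `minikube ssh` command passthrough (this invocation is not part of the log):

	minikube -p force-systemd-flag-565668 ssh -- cat /etc/sysconfig/crio.minikube
	# expected, per the output above:
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '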
	I1018 09:41:28.584717  335228 machine.go:96] duration metric: took 1.567540681s to provisionDockerMachine
	I1018 09:41:28.584728  335228 client.go:171] duration metric: took 7.804104974s to LocalClient.Create
	I1018 09:41:28.584748  335228 start.go:167] duration metric: took 7.804233999s to libmachine.API.Create "force-systemd-flag-565668"
	I1018 09:41:28.584757  335228 start.go:293] postStartSetup for "force-systemd-flag-565668" (driver="docker")
	I1018 09:41:28.584771  335228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:28.584867  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:28.584919  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.608731  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.713530  335228 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:28.717633  335228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:28.717672  335228 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:28.717686  335228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:28.717739  335228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:28.717873  335228 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:28.717890  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> /etc/ssl/certs/1346112.pem
	I1018 09:41:28.718013  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:28.726384  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:28.747428  335228 start.go:296] duration metric: took 162.656191ms for postStartSetup
	I1018 09:41:28.747790  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:28.766782  335228 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/config.json ...
	I1018 09:41:28.767072  335228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:28.767127  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.785750  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.879940  335228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:28.885794  335228 start.go:128] duration metric: took 8.108001274s to createHost
	I1018 09:41:28.885834  335228 start.go:83] releasing machines lock for "force-systemd-flag-565668", held for 8.108168072s
	I1018 09:41:28.885924  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:28.906081  335228 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:28.906119  335228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:28.906142  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.906179  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.928580  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.928580  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:29.088690  335228 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:29.096624  335228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:29.143293  335228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:29.148992  335228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:29.149068  335228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:29.181007  335228 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
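Note that minikube sidelines the competing bridge CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so the step is reversible. A hedged sketch of undoing it by hand inside the node (file name taken from the line above):

	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled /etc/cni/net.d/87-podman-bridge.conflist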
	I1018 09:41:29.181051  335228 start.go:495] detecting cgroup driver to use...
	I1018 09:41:29.181067  335228 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1018 09:41:29.181127  335228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:29.202497  335228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:29.217004  335228 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:29.217065  335228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:29.239158  335228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:29.262878  335228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:29.377249  335228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:29.502228  335228 docker.go:234] disabling docker service ...
	I1018 09:41:29.502289  335228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:29.530588  335228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:29.553036  335228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:29.661258  335228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:29.775732  335228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:41:29.789585  335228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:29.805788  335228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:29.805866  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.816039  335228 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:29.816101  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.825304  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.834762  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.844605  335228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:29.852985  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.861672  335228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.875705  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.884621  335228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:29.892930  335228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:41:29.901654  335228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:29.990359  335228 ssh_runner.go:195] Run: sudo systemctl restart crio
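Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following fragment (reconstructed from the commands; section placement follows CRI-O's standard TOML layout, and surrounding keys are omitted):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart that follow are what make the runtime pick these settings up.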
	I1018 09:41:30.099585  335228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:30.099717  335228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:30.104409  335228 start.go:563] Will wait 60s for crictl version
	I1018 09:41:30.104463  335228 ssh_runner.go:195] Run: which crictl
	I1018 09:41:30.109170  335228 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:30.136691  335228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:41:30.136769  335228 ssh_runner.go:195] Run: crio --version
	I1018 09:41:30.171475  335228 ssh_runner.go:195] Run: crio --version
	I1018 09:41:30.210563  335228 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:29.317255  336575 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:29.317284  336575 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:29.317340  336575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:29.349766  336575 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:29.349795  336575 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:29.349813  336575 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:29.350004  336575 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-238319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:29.350107  336575 ssh_runner.go:195] Run: crio config
	I1018 09:41:29.417213  336575 cni.go:84] Creating CNI manager for ""
	I1018 09:41:29.417244  336575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:29.417277  336575 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:29.417312  336575 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-238319 NodeName:pause-238319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:29.417474  336575 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-238319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
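The generated config above pins the CRI socket, the systemd cgroup driver, and effectively disables kubelet disk-pressure eviction (imageGCHighThresholdPercent: 100 plus 0% evictionHard thresholds). To compare it against upstream defaults, kubeadm can print its stock component configs; this is a standard kubeadm subcommand, not something run in this log:

	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration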
	I1018 09:41:29.417539  336575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:29.432487  336575 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:29.432576  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:29.442868  336575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 09:41:29.459698  336575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:29.476036  336575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1018 09:41:29.492273  336575 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:29.497416  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:29.656704  336575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:29.672647  336575 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319 for IP: 192.168.76.2
	I1018 09:41:29.672681  336575 certs.go:195] generating shared ca certs ...
	I1018 09:41:29.672704  336575 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:29.673052  336575 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:29.673231  336575 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:29.673255  336575 certs.go:257] generating profile certs ...
	I1018 09:41:29.673388  336575 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key
	I1018 09:41:29.673465  336575 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.key.eeadefb0
	I1018 09:41:29.673531  336575 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.key
	I1018 09:41:29.673684  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:29.673881  336575 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:29.673898  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:29.673935  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:29.674012  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:29.674059  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:29.674122  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:29.675008  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:29.698315  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:29.725733  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:29.745389  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:29.765486  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:41:29.783664  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:29.802929  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:29.821767  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:41:29.840667  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:29.858848  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:29.876721  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:29.896015  336575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:29.911439  336575 ssh_runner.go:195] Run: openssl version
	I1018 09:41:29.918864  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:29.927544  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.931361  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.931421  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.972177  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:29.980642  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:29.989184  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.993396  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.993455  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:30.032252  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:41:30.041627  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:30.050971  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.054865  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.054924  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.096459  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
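The openssl -hash / ln -fs pairs above implement OpenSSL's standard subject-hash lookup scheme: each CA under /etc/ssl/certs is reachable through a symlink named <subject-hash>.0. In miniature, using the hash this run printed for minikubeCA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0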
	I1018 09:41:30.106079  336575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:30.110776  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:41:30.158072  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:41:30.208615  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:41:30.250955  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:41:30.290793  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:41:30.333815  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
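Each -checkend 86400 probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); the command exits non-zero if so, which is presumably what drives the decision to regenerate. Standalone form:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" || echo "expires within 24h"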
	I1018 09:41:30.372753  336575 kubeadm.go:400] StartCluster: {Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:30.372904  336575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:30.372975  336575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:30.408734  336575 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:30.408759  336575 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:30.408764  336575 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:30.408777  336575 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:30.408782  336575 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:30.408788  336575 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:30.408793  336575 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:30.408797  336575 cri.go:89] found id: ""
	I1018 09:41:30.408855  336575 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:41:30.424218  336575 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:30Z" level=error msg="open /run/runc: no such file or directory"
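The runc state directory /run/runc is absent inside the node, so the low-level `runc list` probe fails and minikube logs the kubeadm.go:407 warning and moves on. The CRI-level view used a few lines earlier keeps working in that situation:

	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system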
	I1018 09:41:30.424304  336575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:30.433618  336575 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:41:30.433641  336575 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:41:30.433696  336575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:41:30.442013  336575 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:41:30.442485  336575 kubeconfig.go:125] found "pause-238319" server: "https://192.168.76.2:8443"
	I1018 09:41:30.443106  336575 kapi.go:59] client config for pause-238319: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:41:30.443527  336575 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:41:30.443543  336575 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:41:30.443548  336575 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:41:30.443551  336575 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:41:30.443556  336575 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:41:30.443941  336575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:41:30.452328  336575 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:41:30.452357  336575 kubeadm.go:601] duration metric: took 18.709515ms to restartPrimaryControlPlane
	I1018 09:41:30.452367  336575 kubeadm.go:402] duration metric: took 79.62592ms to StartCluster
	I1018 09:41:30.452385  336575 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.452450  336575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:30.453231  336575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.453468  336575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:41:30.453545  336575 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:41:30.453783  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:30.455709  336575 out.go:179] * Verifying Kubernetes components...
	I1018 09:41:30.455712  336575 out.go:179] * Enabled addons: 
	I1018 09:41:30.211670  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:30.230651  335228 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:30.235062  335228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
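The one-liner above is a safe-rewrite pattern for /etc/hosts: filter out any stale host.minikube.internal entry, append the fresh mapping, and install the result via sudo cp (a plain `sudo ... > /etc/hosts` redirect would be opened by the unprivileged shell and fail). Generalized sketch with placeholder NAME/IP, not taken from the log:

	NAME=host.minikube.internal IP=192.168.85.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$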
	I1018 09:41:30.246355  335228 kubeadm.go:883] updating cluster {Name:force-systemd-flag-565668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:30.246502  335228 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:30.246567  335228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:30.283650  335228 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:30.283669  335228 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:30.283713  335228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:30.312418  335228 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:30.312443  335228 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:30.312453  335228 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:30.312562  335228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-565668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:30.312641  335228 ssh_runner.go:195] Run: crio config
	I1018 09:41:30.369584  335228 cni.go:84] Creating CNI manager for ""
	I1018 09:41:30.369611  335228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:30.369633  335228 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:30.369665  335228 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-565668 NodeName:force-systemd-flag-565668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:30.369781  335228 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-565668"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:41:30.369867  335228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:30.378706  335228 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:30.378769  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:30.387373  335228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1018 09:41:30.401502  335228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:30.420053  335228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1018 09:41:30.435752  335228 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:30.439482  335228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:30.450328  335228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:30.548764  335228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:31.356932  331569 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1018 09:41:31.357006  331569 kubeadm.go:322] [preflight] Running pre-flight checks
	I1018 09:41:31.357142  331569 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:41:31.357220  331569 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:41:31.357256  331569 kubeadm.go:322] OS: Linux
	I1018 09:41:31.357292  331569 kubeadm.go:322] CGROUPS_CPU: enabled
	I1018 09:41:31.357332  331569 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1018 09:41:31.357369  331569 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1018 09:41:31.357407  331569 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1018 09:41:31.357443  331569 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1018 09:41:31.357526  331569 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1018 09:41:31.357594  331569 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1018 09:41:31.357656  331569 kubeadm.go:322] CGROUPS_IO: enabled
	I1018 09:41:31.357759  331569 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:41:31.357909  331569 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:41:31.358034  331569 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:41:31.358130  331569 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:41:31.359453  331569 out.go:204]   - Generating certificates and keys ...
	I1018 09:41:31.359564  331569 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1018 09:41:31.359648  331569 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1018 09:41:31.359737  331569 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:41:31.359815  331569 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:41:31.359911  331569 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:41:31.359967  331569 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1018 09:41:31.360019  331569 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1018 09:41:31.360179  331569 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-631894] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:41:31.360224  331569 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1018 09:41:31.360390  331569 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-631894] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:41:31.360468  331569 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:41:31.360529  331569 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:41:31.360591  331569 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1018 09:41:31.360672  331569 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:41:31.360742  331569 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:41:31.360861  331569 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:41:31.360942  331569 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:41:31.361035  331569 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:41:31.361104  331569 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:41:31.361155  331569 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:41:31.362950  331569 out.go:204]   - Booting up control plane ...
	I1018 09:41:31.363033  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:41:31.363109  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:41:31.363168  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:41:31.363252  331569 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:41:31.363332  331569 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:41:31.363374  331569 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1018 09:41:31.363549  331569 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:41:31.363644  331569 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502537 seconds
	I1018 09:41:31.363750  331569 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:41:31.363888  331569 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:41:31.363938  331569 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:41:31.364094  331569 kubeadm.go:322] [mark-control-plane] Marking the node missing-upgrade-631894 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:41:31.364146  331569 kubeadm.go:322] [bootstrap-token] Using token: ehousc.jzaxl23me8418t0u
	I1018 09:41:31.365239  331569 out.go:204]   - Configuring RBAC rules ...
	I1018 09:41:31.365368  331569 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:41:31.365437  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:41:31.365599  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:41:31.365773  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:41:31.365952  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:41:31.366061  331569 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:41:31.366191  331569 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:41:31.366232  331569 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1018 09:41:31.366297  331569 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1018 09:41:31.366302  331569 kubeadm.go:322] 
	I1018 09:41:31.366375  331569 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1018 09:41:31.366380  331569 kubeadm.go:322] 
	I1018 09:41:31.366471  331569 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1018 09:41:31.366475  331569 kubeadm.go:322] 
	I1018 09:41:31.366494  331569 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1018 09:41:31.366545  331569 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:41:31.366588  331569 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:41:31.366591  331569 kubeadm.go:322] 
	I1018 09:41:31.366633  331569 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1018 09:41:31.366636  331569 kubeadm.go:322] 
	I1018 09:41:31.366672  331569 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:41:31.366675  331569 kubeadm.go:322] 
	I1018 09:41:31.366715  331569 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1018 09:41:31.366778  331569 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:41:31.366883  331569 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:41:31.366892  331569 kubeadm.go:322] 
	I1018 09:41:31.367012  331569 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:41:31.367124  331569 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1018 09:41:31.367130  331569 kubeadm.go:322] 
	I1018 09:41:31.367246  331569 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ehousc.jzaxl23me8418t0u \
	I1018 09:41:31.367384  331569 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:41:31.367401  331569 kubeadm.go:322] 	--control-plane 
	I1018 09:41:31.367404  331569 kubeadm.go:322] 
	I1018 09:41:31.367470  331569 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:41:31.367473  331569 kubeadm.go:322] 
	I1018 09:41:31.367540  331569 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ehousc.jzaxl23me8418t0u \
	I1018 09:41:31.367635  331569 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
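For reference, the --discovery-token-ca-cert-hash that kubeadm prints above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch recomputing it (the ca.crt path is an assumption based on the cert transfers later in this log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Read the cluster CA certificate (path assumed from this log's cert layout).
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's hash format: sha256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}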
	I1018 09:41:31.367667  331569 cni.go:84] Creating CNI manager for ""
	I1018 09:41:31.367673  331569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:31.369639  331569 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1018 09:41:31.370722  331569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:41:31.375570  331569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1018 09:41:31.375581  331569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1018 09:41:31.394665  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:41:32.060742  331569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:41:32.060814  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:41:32.060814  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=missing-upgrade-631894 minikube.k8s.io/updated_at=2025_10_18T09_41_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:41:32.069753  331569 ops.go:34] apiserver oom_adj: -16
	I1018 09:41:32.135847  331569 kubeadm.go:1081] duration metric: took 75.07922ms to wait for elevateKubeSystemPrivileges.
	I1018 09:41:32.154550  331569 kubeadm.go:406] StartCluster complete in 10.257621764s
	I1018 09:41:32.154589  331569 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:32.154676  331569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:32.155928  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:32.156206  331569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:41:32.156296  331569 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1018 09:41:32.156385  331569 addons.go:69] Setting storage-provisioner=true in profile "missing-upgrade-631894"
	I1018 09:41:32.156403  331569 addons.go:69] Setting default-storageclass=true in profile "missing-upgrade-631894"
	I1018 09:41:32.156404  331569 config.go:182] Loaded profile config "missing-upgrade-631894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:41:32.156411  331569 addons.go:231] Setting addon storage-provisioner=true in "missing-upgrade-631894"
	I1018 09:41:32.156419  331569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "missing-upgrade-631894"
	I1018 09:41:32.156470  331569 host.go:66] Checking if "missing-upgrade-631894" exists ...
	I1018 09:41:32.156802  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.156978  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.182583  331569 addons.go:231] Setting addon default-storageclass=true in "missing-upgrade-631894"
	I1018 09:41:32.182631  331569 host.go:66] Checking if "missing-upgrade-631894" exists ...
	I1018 09:41:32.183140  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.185450  331569 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:41:32.186153  331569 kapi.go:248] "coredns" deployment in "kube-system" namespace and "missing-upgrade-631894" context rescaled to 1 replicas
	I1018 09:41:32.186573  331569 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:41:32.186593  331569 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:41:32.186604  331569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:41:32.187696  331569 out.go:177] * Verifying Kubernetes components...
	I1018 09:41:27.675224  332699 cli_runner.go:164] Run: docker network inspect cert-expiration-650496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:27.692406  332699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:27.696648  332699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:27.707710  332699 kubeadm.go:883] updating cluster {Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:27.707905  332699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:27.707966  332699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:27.749014  332699 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:27.749030  332699 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:27.749090  332699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:27.787414  332699 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:27.787430  332699 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:27.787438  332699 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:27.787564  332699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-650496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:27.787653  332699 ssh_runner.go:195] Run: crio config
	I1018 09:41:27.841977  332699 cni.go:84] Creating CNI manager for ""
	I1018 09:41:27.841996  332699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:27.842016  332699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:27.842043  332699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-650496 NodeName:cert-expiration-650496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:27.842193  332699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-650496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
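
The YAML above is rendered from the kubeadm options struct logged at kubeadm.go:190. A toy text/template sketch of that rendering step, trimmed to a few fields (the struct and template here are illustrative, not minikube's actual ones):

package main

import (
	"os"
	"text/template"
)

// A trimmed stand-in for the options struct logged above (assumption, not minikube's real type).
type kubeadmParams struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	ClusterName       string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initTmpl))
	// Values taken from the cert-expiration-650496 run above.
	p := kubeadmParams{"192.168.103.2", 8443, "v1.34.1", "mk"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}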
	
	I1018 09:41:27.842258  332699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:27.850927  332699 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:27.850990  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:27.859940  332699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1018 09:41:27.881585  332699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:27.898839  332699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1018 09:41:27.930485  332699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:27.938532  332699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
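The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the fresh mapping via a temp file. A rough Go equivalent, assuming the same tab-separated /etc/hosts layout (the temp-file staging is elided):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.103.2" // from this log

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		// Drop any existing entry for the control-plane alias.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("pinned", host, "->", ip)
}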
	I1018 09:41:27.962305  332699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:28.062529  332699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:28.108337  332699 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496 for IP: 192.168.103.2
	I1018 09:41:28.108350  332699 certs.go:195] generating shared ca certs ...
	I1018 09:41:28.108370  332699 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.108525  332699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:28.108576  332699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:28.108585  332699 certs.go:257] generating profile certs ...
	I1018 09:41:28.108644  332699 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key
	I1018 09:41:28.108662  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt with IP's: []
	I1018 09:41:28.436441  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt ...
	I1018 09:41:28.436459  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt: {Name:mka41d5a8c5180ef43755c2753eca367d5b30da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.436651  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key ...
	I1018 09:41:28.436663  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key: {Name:mk5973fa0ec4d3fc5dd5b89c40340b74358b4b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.436776  332699 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9
	I1018 09:41:28.436790  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 09:41:28.631348  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 ...
	I1018 09:41:28.631365  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9: {Name:mkf5d9fd0696a98c125f4850eb0e8369a5f0bc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.631508  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9 ...
	I1018 09:41:28.631534  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9: {Name:mka8eddd6f3f67c7a1eb0ed33729d4354a53fdf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.631604  332699 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt
	I1018 09:41:28.631692  332699 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key
	I1018 09:41:28.631746  332699 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key
	I1018 09:41:28.631757  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt with IP's: []
	I1018 09:41:28.936317  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt ...
	I1018 09:41:28.936341  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt: {Name:mkecf2edf6790a0618b4e0abcc90392c08484139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.936548  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key ...
	I1018 09:41:28.936561  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key: {Name:mkdc7347ef5384c82bf439f5d935082cebfec1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
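Each "generating signed profile cert" step above is ordinary x509 issuance: build a leaf template carrying the logged IP SANs and sign it with the minikubeCA key. A condensed, self-contained Go sketch of that flow (a throwaway CA stands in for the real ca.crt/ca.key; error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA (the real flow loads ca.crt/ca.key from disk).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf template carrying the IP SANs logged for the apiserver profile cert above.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leaf := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leaf, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}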
	I1018 09:41:28.936838  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:28.936885  332699 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:28.936895  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:28.936926  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:28.936955  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:28.936983  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:28.937039  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:28.938067  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:28.959093  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:28.977705  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:28.995555  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:29.014164  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:41:29.032453  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:41:29.052576  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:29.074895  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:41:29.095200  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:29.118617  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:29.142088  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:29.164730  332699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:29.181757  332699 ssh_runner.go:195] Run: openssl version
	I1018 09:41:29.189654  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:29.200357  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.205525  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.205578  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.261100  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:29.272722  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:29.282539  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.287407  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.287453  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.343876  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:29.355205  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:29.366180  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.371333  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.371389  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.421588  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
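The test -L || ln -fs pairs above install each CA under its OpenSSL subject-hash name (what openssl x509 -hash -noout prints, e.g. b5213941), which is how lookups in /etc/ssl/certs resolve it. A small Go sketch of the same hash-and-symlink step, shelling out to openssl just as the log does:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash (e.g. b5213941) on a line by itself.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs: remove any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}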
	I1018 09:41:29.437018  332699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:29.441558  332699 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:41:29.441621  332699 kubeadm.go:400] StartCluster: {Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:29.441697  332699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:29.441752  332699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:29.474904  332699 cri.go:89] found id: ""
	I1018 09:41:29.474969  332699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:29.485970  332699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:29.496281  332699 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:29.496332  332699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:29.506298  332699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:29.506308  332699 kubeadm.go:157] found existing configuration files:
	
	I1018 09:41:29.506357  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:41:29.515459  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:41:29.515512  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:41:29.524506  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:41:29.538939  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:41:29.538986  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:41:29.549531  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:41:29.560229  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:41:29.560278  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:41:29.569434  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:41:29.579284  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:41:29.579329  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:41:29.589413  332699 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:29.674367  332699 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:41:29.750264  332699 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:41:30.456762  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:30.456758  336575 addons.go:514] duration metric: took 3.22007ms for enable addons: enabled=[]
	I1018 09:41:30.579342  336575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:30.594226  336575 node_ready.go:35] waiting up to 6m0s for node "pause-238319" to be "Ready" ...
	I1018 09:41:30.603306  336575 node_ready.go:49] node "pause-238319" is "Ready"
	I1018 09:41:30.603336  336575 node_ready.go:38] duration metric: took 9.062201ms for node "pause-238319" to be "Ready" ...
	I1018 09:41:30.603351  336575 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:41:30.603400  336575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:41:30.619528  336575 api_server.go:72] duration metric: took 166.027505ms to wait for apiserver process to appear ...
	I1018 09:41:30.619556  336575 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:41:30.619589  336575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:41:30.625605  336575 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:41:30.626611  336575 api_server.go:141] control plane version: v1.34.1
	I1018 09:41:30.626648  336575 api_server.go:131] duration metric: took 7.072795ms to wait for apiserver health ...
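The healthz wait in api_server.go amounts to polling an HTTPS GET against /healthz until the body reads ok, as logged above. A bare-bones version of that probe (TLS verification is skipped here for brevity; the real check trusts the minikube CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == 200 && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}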
	I1018 09:41:30.626660  336575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:41:30.629758  336575 system_pods.go:59] 7 kube-system pods found
	I1018 09:41:30.629799  336575 system_pods.go:61] "coredns-66bc5c9577-lqmd8" [6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867] Running
	I1018 09:41:30.629810  336575 system_pods.go:61] "etcd-pause-238319" [efb9eb2e-4b92-4587-817e-27213d4814e7] Running
	I1018 09:41:30.629843  336575 system_pods.go:61] "kindnet-w8lp5" [3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed] Running
	I1018 09:41:30.629851  336575 system_pods.go:61] "kube-apiserver-pause-238319" [3635fda7-eccb-4928-a6a8-c8ccef65afff] Running
	I1018 09:41:30.629857  336575 system_pods.go:61] "kube-controller-manager-pause-238319" [ad3b8090-cb83-44cd-bb61-48729d3ad835] Running
	I1018 09:41:30.629867  336575 system_pods.go:61] "kube-proxy-769dd" [3b6484de-71d8-4a6c-93ba-2ae0eb18308b] Running
	I1018 09:41:30.629872  336575 system_pods.go:61] "kube-scheduler-pause-238319" [95cf08f1-1435-462f-b949-ad6a907e32c8] Running
	I1018 09:41:30.629882  336575 system_pods.go:74] duration metric: took 3.205101ms to wait for pod list to return data ...
	I1018 09:41:30.629897  336575 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:41:30.631716  336575 default_sa.go:45] found service account: "default"
	I1018 09:41:30.631736  336575 default_sa.go:55] duration metric: took 1.831893ms for default service account to be created ...
	I1018 09:41:30.631746  336575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:41:30.634070  336575 system_pods.go:86] 7 kube-system pods found
	I1018 09:41:30.634091  336575 system_pods.go:89] "coredns-66bc5c9577-lqmd8" [6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867] Running
	I1018 09:41:30.634096  336575 system_pods.go:89] "etcd-pause-238319" [efb9eb2e-4b92-4587-817e-27213d4814e7] Running
	I1018 09:41:30.634099  336575 system_pods.go:89] "kindnet-w8lp5" [3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed] Running
	I1018 09:41:30.634103  336575 system_pods.go:89] "kube-apiserver-pause-238319" [3635fda7-eccb-4928-a6a8-c8ccef65afff] Running
	I1018 09:41:30.634106  336575 system_pods.go:89] "kube-controller-manager-pause-238319" [ad3b8090-cb83-44cd-bb61-48729d3ad835] Running
	I1018 09:41:30.634109  336575 system_pods.go:89] "kube-proxy-769dd" [3b6484de-71d8-4a6c-93ba-2ae0eb18308b] Running
	I1018 09:41:30.634112  336575 system_pods.go:89] "kube-scheduler-pause-238319" [95cf08f1-1435-462f-b949-ad6a907e32c8] Running
	I1018 09:41:30.634118  336575 system_pods.go:126] duration metric: took 2.366264ms to wait for k8s-apps to be running ...
	I1018 09:41:30.634126  336575 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:41:30.634166  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:30.648104  336575 system_svc.go:56] duration metric: took 13.966014ms WaitForService to wait for kubelet
	I1018 09:41:30.648139  336575 kubeadm.go:586] duration metric: took 194.642708ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:41:30.648162  336575 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:41:30.651032  336575 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:41:30.651062  336575 node_conditions.go:123] node cpu capacity is 8
	I1018 09:41:30.651075  336575 node_conditions.go:105] duration metric: took 2.906823ms to run NodePressure ...
	I1018 09:41:30.651087  336575 start.go:241] waiting for startup goroutines ...
	I1018 09:41:30.651096  336575 start.go:246] waiting for cluster config update ...
	I1018 09:41:30.651105  336575 start.go:255] writing updated cluster config ...
	I1018 09:41:30.651445  336575 ssh_runner.go:195] Run: rm -f paused
	I1018 09:41:30.656249  336575 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:41:30.656914  336575 kapi.go:59] client config for pause-238319: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:41:30.659937  336575 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lqmd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.665210  336575 pod_ready.go:94] pod "coredns-66bc5c9577-lqmd8" is "Ready"
	I1018 09:41:30.665236  336575 pod_ready.go:86] duration metric: took 5.276598ms for pod "coredns-66bc5c9577-lqmd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.667358  336575 pod_ready.go:83] waiting for pod "etcd-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.671487  336575 pod_ready.go:94] pod "etcd-pause-238319" is "Ready"
	I1018 09:41:30.671513  336575 pod_ready.go:86] duration metric: took 4.13365ms for pod "etcd-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.673613  336575 pod_ready.go:83] waiting for pod "kube-apiserver-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.678250  336575 pod_ready.go:94] pod "kube-apiserver-pause-238319" is "Ready"
	I1018 09:41:30.678273  336575 pod_ready.go:86] duration metric: took 4.638952ms for pod "kube-apiserver-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.680468  336575 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.060847  336575 pod_ready.go:94] pod "kube-controller-manager-pause-238319" is "Ready"
	I1018 09:41:31.060886  336575 pod_ready.go:86] duration metric: took 380.398443ms for pod "kube-controller-manager-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.260729  336575 pod_ready.go:83] waiting for pod "kube-proxy-769dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.661165  336575 pod_ready.go:94] pod "kube-proxy-769dd" is "Ready"
	I1018 09:41:31.661192  336575 pod_ready.go:86] duration metric: took 400.441287ms for pod "kube-proxy-769dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.860218  336575 pod_ready.go:83] waiting for pod "kube-scheduler-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:32.264198  336575 pod_ready.go:94] pod "kube-scheduler-pause-238319" is "Ready"
	I1018 09:41:32.264229  336575 pod_ready.go:86] duration metric: took 403.983061ms for pod "kube-scheduler-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:32.264242  336575 pod_ready.go:40] duration metric: took 1.607951424s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
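Each pod_ready.go wait above reduces to listing the kube-system pods matching a component label and checking their PodReady condition. A compact client-go sketch of one such check (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// One of the label selectors from the log: the CoreDNS pods.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q Ready=%s\n", p.Name, c.Status)
			}
		}
	}
}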
	I1018 09:41:32.336115  336575 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:41:32.338754  336575 out.go:179] * Done! kubectl is now configured to use "pause-238319" cluster and "default" namespace by default
	I1018 09:41:32.186688  331569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631894
	I1018 09:41:32.188968  331569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:32.209606  331569 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:41:32.209618  331569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:41:32.209664  331569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631894
	I1018 09:41:32.212846  331569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/missing-upgrade-631894/id_rsa Username:docker}
	I1018 09:41:32.230150  331569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/missing-upgrade-631894/id_rsa Username:docker}
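The "scp memory -->" transfers earlier in this run push an in-memory asset over SSH clients like the ones just opened on 127.0.0.1:33141. A sketch of that pattern with golang.org/x/crypto/ssh (paths, user, and the manifest body are placeholders, not minikube's actual values):

package main

import (
	"bytes"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // placeholder
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33141", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	manifest := []byte("# addon manifest bytes held in memory\n")
	sess.Stdin = bytes.NewReader(manifest)
	// Stream the in-memory asset straight to the remote path, like "scp memory -->".
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storageclass.yaml >/dev/null"); err != nil {
		panic(err)
	}
}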
	I1018 09:41:32.250207  331569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:41:32.251400  331569 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:41:32.251448  331569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:41:32.334081  331569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:41:32.347880  331569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:41:32.572864  331569 start.go:926] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
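The replace at 09:41:32.250207 rewrote the CoreDNS Corefile so the in-cluster resolver answers for host.minikube.internal; stripped of the sed escaping, the stanza it splices in ahead of the forward plugin is just:

hosts {
   192.168.94.1 host.minikube.internal
   fallthrough
}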
	I1018 09:41:32.572947  331569 api_server.go:72] duration metric: took 386.341227ms to wait for apiserver process to appear ...
	I1018 09:41:32.572965  331569 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:41:32.572983  331569 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:41:32.579561  331569 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:41:32.581093  331569 api_server.go:141] control plane version: v1.28.3
	I1018 09:41:32.581112  331569 api_server.go:131] duration metric: took 8.140467ms to wait for apiserver health ...
	I1018 09:41:32.581122  331569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:41:32.590510  331569 system_pods.go:59] 4 kube-system pods found
	I1018 09:41:32.590546  331569 system_pods.go:61] "etcd-missing-upgrade-631894" [e53c74aa-9bce-412b-aac0-8da7140f834d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:41:32.590557  331569 system_pods.go:61] "kube-apiserver-missing-upgrade-631894" [8f3c1359-aff2-40e3-98b9-d07436bc79ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:41:32.590568  331569 system_pods.go:61] "kube-controller-manager-missing-upgrade-631894" [ea8ae884-a7f3-4663-a05b-e7171359d550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:41:32.590578  331569 system_pods.go:61] "kube-scheduler-missing-upgrade-631894" [e52bac3d-821f-4972-a84c-f9b558213a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:41:32.590585  331569 system_pods.go:74] duration metric: took 9.457267ms to wait for pod list to return data ...
	I1018 09:41:32.590597  331569 kubeadm.go:581] duration metric: took 403.995233ms to wait for : map[apiserver:true system_pods:true] ...
	I1018 09:41:32.590611  331569 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:41:32.598794  331569 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:41:32.598807  331569 node_conditions.go:123] node cpu capacity is 8
	I1018 09:41:32.598819  331569 node_conditions.go:105] duration metric: took 8.203823ms to run NodePressure ...
	I1018 09:41:32.598844  331569 start.go:228] waiting for startup goroutines ...
	I1018 09:41:32.805981  331569 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:41:30.571575  335228 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668 for IP: 192.168.85.2
	I1018 09:41:30.571599  335228 certs.go:195] generating shared ca certs ...
	I1018 09:41:30.571621  335228 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.571788  335228 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:30.571879  335228 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:30.571897  335228 certs.go:257] generating profile certs ...
	I1018 09:41:30.571972  335228 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key
	I1018 09:41:30.571996  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt with IP's: []
	I1018 09:41:30.701009  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt ...
	I1018 09:41:30.701041  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt: {Name:mk6bfb4f0817ac3fa3d50a7e4151da3d6430608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.701262  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key ...
	I1018 09:41:30.701286  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key: {Name:mke2f6da2972b68b9d2a4fb4b67a395a35c5409d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.701415  335228 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894
	I1018 09:41:30.701440  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 09:41:31.423018  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 ...
	I1018 09:41:31.423055  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894: {Name:mkd4aaf21ba135ffa62b6eb85fc66b04757b0486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.423268  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894 ...
	I1018 09:41:31.423291  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894: {Name:mk9cbab455dd004db12d7ec9e2e45f615cbfb732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.423428  335228 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt
	I1018 09:41:31.423540  335228 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key
	I1018 09:41:31.423625  335228 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key
	I1018 09:41:31.423651  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt with IP's: []
	I1018 09:41:31.926675  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt ...
	I1018 09:41:31.926704  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt: {Name:mkced8c1daff26edd02db359e819db628030f328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.926899  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key ...
	I1018 09:41:31.926920  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key: {Name:mk7e8de9d64656c38a7bb0c2c877583b42a915c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.927044  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 09:41:31.927070  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 09:41:31.927085  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 09:41:31.927105  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 09:41:31.927126  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 09:41:31.927142  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 09:41:31.927160  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 09:41:31.927178  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 09:41:31.927240  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:31.927285  335228 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:31.927299  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:31.927332  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:31.927363  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:31.927395  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:31.927451  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:31.927491  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:31.927512  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem -> /usr/share/ca-certificates/134611.pem
	I1018 09:41:31.927527  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> /usr/share/ca-certificates/1346112.pem
	I1018 09:41:31.928166  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:31.950185  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:31.971569  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:31.989006  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:32.008136  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 09:41:32.027124  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:32.048603  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:32.069743  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:41:32.089326  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:32.109238  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:32.129714  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:32.152095  335228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:32.169655  335228 ssh_runner.go:195] Run: openssl version
	I1018 09:41:32.181601  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:32.195104  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.201152  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.201216  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.255069  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:32.271629  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:32.288946  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.296789  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.296970  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.355468  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:32.374321  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:32.387785  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.393003  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.393112  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.449061  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
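
	The runs above install each CA into OpenSSL's hashed-directory layout: "openssl x509 -hash -noout" prints the hash of the certificate's subject, and the certificate is then symlinked under that hash (with a ".0" suffix for the first collision slot) so OpenSSL can find it by name in /etc/ssl/certs. A minimal sketch of the same two steps, reusing the minikubeCA path and the b5213941 hash from this run:

		# print the subject hash OpenSSL uses for lookup (b5213941 for this cert)
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		# link the cert under <hash>.0 so it is picked up as a trust anchor
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
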
	I1018 09:41:32.463070  335228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:32.469240  335228 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:41:32.469304  335228 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-565668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:32.469384  335228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:32.469445  335228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:32.521592  335228 cri.go:89] found id: ""
	I1018 09:41:32.521669  335228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:32.540879  335228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:32.557549  335228 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:32.557653  335228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:32.570169  335228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:32.570229  335228 kubeadm.go:157] found existing configuration files:
	
	I1018 09:41:32.570292  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:41:32.579524  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:41:32.579639  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:41:32.592894  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:41:32.603287  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:41:32.603347  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:41:32.613600  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:41:32.622710  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:41:32.622764  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:41:32.631706  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:41:32.640860  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:41:32.640927  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
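
	The four grep-then-rm pairs above are minikube's stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise deleted so kubeadm can regenerate it. A bash sketch of the equivalent check, with the endpoint taken from the log:

		endpoint="https://control-plane.minikube.internal:8443"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		  # keep the file only if it already points at the expected endpoint
		  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done
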
	I1018 09:41:32.650036  335228 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:32.693455  335228 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:41:32.693529  335228 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:41:32.718522  335228 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:41:32.718630  335228 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:41:32.718689  335228 kubeadm.go:318] OS: Linux
	I1018 09:41:32.718753  335228 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:41:32.718816  335228 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:41:32.718928  335228 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:41:32.719031  335228 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:41:32.719115  335228 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:41:32.719163  335228 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:41:32.719203  335228 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:41:32.719239  335228 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:41:32.808176  335228 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:41:32.808322  335228 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:41:32.808434  335228 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:41:32.819273  335228 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:41:32.807049  331569 addons.go:502] enable addons completed in 650.752442ms: enabled=[storage-provisioner default-storageclass]
	I1018 09:41:32.807084  331569 start.go:233] waiting for cluster config update ...
	I1018 09:41:32.807105  331569 start.go:242] writing updated cluster config ...
	I1018 09:41:32.807377  331569 ssh_runner.go:195] Run: rm -f paused
	I1018 09:41:32.864495  331569 start.go:600] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1018 09:41:32.865898  331569 out.go:177] 
	W1018 09:41:32.867343  331569 out.go:239] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1018 09:41:32.868759  331569 out.go:177]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1018 09:41:32.870376  331569 out.go:177] * Done! kubectl is now configured to use "missing-upgrade-631894" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.052597133Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.053518702Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.053535533Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.05354967Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.054469947Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.05449043Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059162289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059187671Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059914425Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
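
	The escaped msg above is CRI-O logging its effective TOML configuration at startup ([crio], [crio.api], [crio.runtime], [crio.image], [crio.network], [crio.metrics], [crio.tracing], [crio.stats], [crio.nri]). To read the same content without the \n and \" escaping, the configuration can be dumped as plain TOML on the node (a sketch; crio config prints the configuration the daemon would run with):

		sudo crio config
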
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.060328783Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.060368533Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.066912565Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.114194339Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lqmd8 Namespace:kube-system ID:3d6907308702e966e0f74bce0fdf6191620f32d933b84ad08ed8b2357f29db60 UID:6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867 NetNS:/var/run/netns/2e6accb6-c824-4780-9538-6a9f11d29b7d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000ca780}] Aliases:map[]}"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.1144422Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lqmd8 for CNI network kindnet (type=ptp)"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115084921Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115123111Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115192175Z" level=info msg="Create NRI interface"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115402784Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115423512Z" level=info msg="runtime interface created"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115438026Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115445974Z" level=info msg="runtime interface starting up..."
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115453588Z" level=info msg="starting plugins..."
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115469015Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.116166011Z" level=info msg="No systemd watchdog enabled"
	Oct 18 09:41:29 pause-238319 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cb50c5561b8a2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   3d6907308702e       coredns-66bc5c9577-lqmd8               kube-system
	fc0bb0d4fc4e6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   44d7361e6f355       kindnet-w8lp5                          kube-system
	a019d95fa3490       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   9d162f4b827b8       kube-proxy-769dd                       kube-system
	8074b8d8db125       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   ed2b2de72f9b7       kube-apiserver-pause-238319            kube-system
	ab8c2763e1457       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   5e0f9fd1573dd       etcd-pause-238319                      kube-system
	45e57534f6b2e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   6d15ad88e83db       kube-scheduler-pause-238319            kube-system
	be1d9fd168ccb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   95ab681df50c5       kube-controller-manager-pause-238319   kube-system
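
	The table above is crictl's all-containers listing for the node; with access to the CRI socket it can be reproduced with (a sketch):

		sudo crictl ps -a
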
	
	
	==> coredns [cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44329 - 19230 "HINFO IN 2235917094381022108.6888995318603568073. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024023698s
	
	
	==> describe nodes <==
	Name:               pause-238319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-238319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=pause-238319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_41_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-238319
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:41:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-238319
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8fb9a7a0-8858-4074-8948-817d47122c80
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lqmd8                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-238319                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-w8lp5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-238319             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-238319    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-769dd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-238319             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-238319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-238319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-238319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-238319 event: Registered Node pause-238319 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-238319 status is now: NodeReady
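
	The node summary above matches kubectl's node describe output; with a kubeconfig for this cluster, an equivalent dump can be produced with (node name taken from the log):

		kubectl describe node pause-238319
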
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
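
	The repeated "martian source" entries are the kernel flagging packets whose source address is not plausible on the receiving interface (here 127.0.0.1 and pod-CIDR addresses arriving on eth0); they appear when martian logging is enabled. The standard sysctl controlling this (a generic Linux knob, not minikube-specific):

		# log packets with impossible source addresses; rp_filter decides whether they are dropped
		sudo sysctl -w net.ipv4.conf.all.log_martians=1
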
	
	
	==> etcd [ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703] <==
	{"level":"warn","ts":"2025-10-18T09:41:01.270719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.285059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.290042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.305890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.364977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:41:10.295314Z","caller":"traceutil/trace.go:172","msg":"trace[461286718] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"157.16995ms","start":"2025-10-18T09:41:10.138122Z","end":"2025-10-18T09:41:10.295292Z","steps":["trace[461286718] 'process raft request'  (duration: 124.546043ms)","trace[461286718] 'compare'  (duration: 32.495696ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:41:10.295355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.157904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T09:41:10.295436Z","caller":"traceutil/trace.go:172","msg":"trace[312135317] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:337; }","duration":"131.283172ms","start":"2025-10-18T09:41:10.164135Z","end":"2025-10-18T09:41:10.295418Z","steps":["trace[312135317] 'agreement among raft nodes before linearized reading'  (duration: 98.505306ms)","trace[312135317] 'range keys from in-memory index tree'  (duration: 32.524198ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:41:10.296491Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.862666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-10-18T09:41:10.296544Z","caller":"traceutil/trace.go:172","msg":"trace[747627744] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:338; }","duration":"114.923378ms","start":"2025-10-18T09:41:10.181609Z","end":"2025-10-18T09:41:10.296533Z","steps":["trace[747627744] 'agreement among raft nodes before linearized reading'  (duration: 114.77565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:10.296536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.859947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-18T09:41:10.296576Z","caller":"traceutil/trace.go:172","msg":"trace[1044351635] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"114.91131ms","start":"2025-10-18T09:41:10.181656Z","end":"2025-10-18T09:41:10.296568Z","steps":["trace[1044351635] 'agreement among raft nodes before linearized reading'  (duration: 114.793838ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296812Z","caller":"traceutil/trace.go:172","msg":"trace[62056727] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"156.61289ms","start":"2025-10-18T09:41:10.140170Z","end":"2025-10-18T09:41:10.296783Z","steps":["trace[62056727] 'process raft request'  (duration: 156.247996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296810Z","caller":"traceutil/trace.go:172","msg":"trace[1498315554] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"152.019008ms","start":"2025-10-18T09:41:10.144781Z","end":"2025-10-18T09:41:10.296800Z","steps":["trace[1498315554] 'process raft request'  (duration: 151.980287ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296859Z","caller":"traceutil/trace.go:172","msg":"trace[1338658320] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"156.490837ms","start":"2025-10-18T09:41:10.140363Z","end":"2025-10-18T09:41:10.296853Z","steps":["trace[1338658320] 'process raft request'  (duration: 156.358893ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:13.676368Z","caller":"traceutil/trace.go:172","msg":"trace[380464295] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"128.328728ms","start":"2025-10-18T09:41:13.548021Z","end":"2025-10-18T09:41:13.676349Z","steps":["trace[380464295] 'process raft request'  (duration: 124.269729ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:14.947911Z","caller":"traceutil/trace.go:172","msg":"trace[368124397] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"156.598008ms","start":"2025-10-18T09:41:14.791297Z","end":"2025-10-18T09:41:14.947895Z","steps":["trace[368124397] 'process raft request'  (duration: 156.397132ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:18.990224Z","caller":"traceutil/trace.go:172","msg":"trace[1176147796] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"164.789475ms","start":"2025-10-18T09:41:18.825412Z","end":"2025-10-18T09:41:18.990202Z","steps":["trace[1176147796] 'process raft request'  (duration: 164.610616ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:19.143206Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.431541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-238319\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-18T09:41:19.143784Z","caller":"traceutil/trace.go:172","msg":"trace[1140840071] range","detail":"{range_begin:/registry/minions/pause-238319; range_end:; response_count:1; response_revision:379; }","duration":"108.021043ms","start":"2025-10-18T09:41:19.035722Z","end":"2025-10-18T09:41:19.143743Z","steps":["trace[1140840071] 'agreement among raft nodes before linearized reading'  (duration: 40.436224ms)","trace[1140840071] 'range keys from in-memory index tree'  (duration: 66.922084ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:41:19.145841Z","caller":"traceutil/trace.go:172","msg":"trace[329876065] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"146.869319ms","start":"2025-10-18T09:41:18.998927Z","end":"2025-10-18T09:41:19.145796Z","steps":["trace[329876065] 'process raft request'  (duration: 77.288627ms)","trace[329876065] 'compare'  (duration: 66.891713ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:41:19.550090Z","caller":"traceutil/trace.go:172","msg":"trace[778341082] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"261.30654ms","start":"2025-10-18T09:41:19.288767Z","end":"2025-10-18T09:41:19.550073Z","steps":["trace[778341082] 'process raft request'  (duration: 261.202289ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:19.707062Z","caller":"traceutil/trace.go:172","msg":"trace[2121977494] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"148.007042ms","start":"2025-10-18T09:41:19.559033Z","end":"2025-10-18T09:41:19.707040Z","steps":["trace[2121977494] 'process raft request'  (duration: 147.860496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:25.155012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.575037ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:41:25.155090Z","caller":"traceutil/trace.go:172","msg":"trace[683017480] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:403; }","duration":"262.675155ms","start":"2025-10-18T09:41:24.892398Z","end":"2025-10-18T09:41:25.155073Z","steps":["trace[683017480] 'range keys from in-memory index tree'  (duration: 262.527696ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:41:36 up  1:23,  0 user,  load average: 5.72, 2.61, 1.47
	Linux pause-238319 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9] <==
	I1018 09:41:10.643338       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:41:10.643733       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:41:10.643923       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:41:10.643945       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:41:10.643971       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:41:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:41:10.918784       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:41:10.918836       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:41:10.918851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:41:10.919025       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:41:11.319563       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:41:11.319601       1 metrics.go:72] Registering metrics
	I1018 09:41:11.319659       1 controller.go:711] "Syncing nftables rules"
	I1018 09:41:20.919459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:41:20.919514       1 main.go:301] handling current node
	I1018 09:41:30.923810       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:41:30.923880       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239] <==
	I1018 09:41:01.912260       1 policy_source.go:240] refreshing policies
	E1018 09:41:01.951459       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:41:01.998898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:41:02.004409       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:02.004981       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:41:02.014500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:02.014888       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:41:02.103526       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:41:02.801676       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:41:02.805841       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:41:02.805918       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:41:03.379415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:41:03.422560       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:41:03.507037       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:41:03.516053       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:41:03.517192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:41:03.522732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:41:03.897925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:41:04.491793       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:41:04.503709       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:41:04.510882       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:41:09.550679       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:41:09.705400       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:09.710868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:10.003404       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903] <==
	I1018 09:41:08.896910       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:41:08.896990       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:41:08.897000       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:41:08.897032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:41:08.897203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:41:08.897216       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:41:08.897233       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:41:08.897633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:41:08.897688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:41:08.897725       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:41:08.897870       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:41:08.898115       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:41:08.899312       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:41:08.899342       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:41:08.901603       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:41:08.906851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:41:08.907889       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:41:08.924189       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:41:08.930537       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:41:08.936783       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:41:08.945857       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:41:08.948039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:41:08.948054       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:41:08.948060       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:41:23.850644       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df] <==
	I1018 09:41:10.475562       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:41:10.548153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:41:10.649434       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:41:10.649496       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:41:10.649749       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:41:10.675464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:41:10.675542       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:41:10.682656       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:41:10.683243       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:41:10.683267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:41:10.685085       1 config.go:200] "Starting service config controller"
	I1018 09:41:10.685618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:41:10.685460       1 config.go:309] "Starting node config controller"
	I1018 09:41:10.685698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:41:10.685705       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:41:10.685479       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:41:10.685714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:41:10.685476       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:41:10.685727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:41:10.786008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:41:10.786049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:41:10.786107       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e] <==
	E1018 09:41:01.871418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:41:01.871536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:41:01.871546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:41:01.871602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:41:01.871229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:41:01.872750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:41:01.872783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:41:01.872862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:41:01.872970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:41:01.873032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:41:01.873101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:41:01.873101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:41:02.700846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:41:02.706843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:41:02.743623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:41:02.785320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:41:02.896152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:41:02.931654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:41:02.972258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:41:02.977905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:41:03.113987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:41:03.134159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:41:03.188556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:41:03.244790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 09:41:05.664600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.424236    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-238319" podStartSLOduration=1.424212968 podStartE2EDuration="1.424212968s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.409929844 +0000 UTC m=+1.155030491" watchObservedRunningTime="2025-10-18 09:41:05.424212968 +0000 UTC m=+1.169313615"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.424396    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-238319" podStartSLOduration=1.424386297 podStartE2EDuration="1.424386297s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.424097698 +0000 UTC m=+1.169198344" watchObservedRunningTime="2025-10-18 09:41:05.424386297 +0000 UTC m=+1.169486937"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.443413    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-238319" podStartSLOduration=1.443372179 podStartE2EDuration="1.443372179s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.443218855 +0000 UTC m=+1.188319501" watchObservedRunningTime="2025-10-18 09:41:05.443372179 +0000 UTC m=+1.188472823"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.443666    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-238319" podStartSLOduration=1.4436529089999999 podStartE2EDuration="1.443652909s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.433524355 +0000 UTC m=+1.178625015" watchObservedRunningTime="2025-10-18 09:41:05.443652909 +0000 UTC m=+1.188753555"
	Oct 18 09:41:08 pause-238319 kubelet[1332]: I1018 09:41:08.896056    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:41:08 pause-238319 kubelet[1332]: I1018 09:41:08.897351    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077639    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9cc\" (UniqueName: \"kubernetes.io/projected/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-kube-api-access-zr9cc\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077701    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-xtables-lock\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077727    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxg8\" (UniqueName: \"kubernetes.io/projected/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-kube-api-access-bzxg8\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078477    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-kube-proxy\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078583    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-lib-modules\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078620    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-xtables-lock\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078663    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-lib-modules\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078687    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-cni-cfg\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:11 pause-238319 kubelet[1332]: I1018 09:41:11.436969    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-769dd" podStartSLOduration=1.436945891 podStartE2EDuration="1.436945891s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:11.436329114 +0000 UTC m=+7.181429772" watchObservedRunningTime="2025-10-18 09:41:11.436945891 +0000 UTC m=+7.182046537"
	Oct 18 09:41:11 pause-238319 kubelet[1332]: I1018 09:41:11.437104    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w8lp5" podStartSLOduration=1.437092484 podStartE2EDuration="1.437092484s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:11.426018995 +0000 UTC m=+7.171119641" watchObservedRunningTime="2025-10-18 09:41:11.437092484 +0000 UTC m=+7.182193130"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.013218    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.161614    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrwg\" (UniqueName: \"kubernetes.io/projected/6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867-kube-api-access-rvrwg\") pod \"coredns-66bc5c9577-lqmd8\" (UID: \"6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867\") " pod="kube-system/coredns-66bc5c9577-lqmd8"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.161672    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867-config-volume\") pod \"coredns-66bc5c9577-lqmd8\" (UID: \"6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867\") " pod="kube-system/coredns-66bc5c9577-lqmd8"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.450645    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lqmd8" podStartSLOduration=11.450618886 podStartE2EDuration="11.450618886s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:21.450191894 +0000 UTC m=+17.195292540" watchObservedRunningTime="2025-10-18 09:41:21.450618886 +0000 UTC m=+17.195719533"
	Oct 18 09:41:29 pause-238319 kubelet[1332]: E1018 09:41:29.379234    1332 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Oct 18 09:41:32 pause-238319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:41:32 pause-238319 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:41:32 pause-238319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:41:32 pause-238319 systemd[1]: kubelet.service: Consumed 1.244s CPU time.
	

-- /stdout --
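Note on the kubelet tail above: the systemd "Stopping kubelet.service" entries at 09:41:32 are the pause itself taking effect, since minikube pause stops the kubelet and freezes the Kubernetes workloads inside the node while the Docker container keeps running. A manual spot-check of that state (a sketch, reusing the profile/container name pause-238319 from this run):

    docker exec pause-238319 systemctl is-active kubelet   # expected: inactive after a successful pause
    out/minikube-linux-amd64 status -p pause-238319        # per-component summary of the paused cluster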
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-238319 -n pause-238319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-238319 -n pause-238319: exit status 2 (378.514013ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
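The Go template prints only the requested field (here "Running" for .APIServer), while the exit code reflects the state of all components, so a stopped kubelet on a paused cluster still yields exit status 2; that is why the harness flags it as "may be ok". The probe can be rerun by hand (a sketch, names taken from this run):

    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p pause-238319 -n pause-238319
    echo $?   # non-zero whenever any component is not running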
helpers_test.go:269: (dbg) Run:  kubectl --context pause-238319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-238319
helpers_test.go:243: (dbg) docker inspect pause-238319:

-- stdout --
	[
	    {
	        "Id": "f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427",
	        "Created": "2025-10-18T09:40:48.429110456Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 321963,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:40:48.471150144Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/hostname",
	        "HostsPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/hosts",
	        "LogPath": "/var/lib/docker/containers/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427/f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427-json.log",
	        "Name": "/pause-238319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-238319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-238319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f3bf6c5c8f72fb2eae5ff5731f6de5de108cae5fb7c04d3d9c452481fc185427",
	                "LowerDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a3a72bf37eb08584605d16bb39c287756db9a8e55bc82ed0b5cdbdd7347c598/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-238319",
	                "Source": "/var/lib/docker/volumes/pause-238319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-238319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-238319",
	                "name.minikube.sigs.k8s.io": "pause-238319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "27de14e01e5994941bb9f5343bc2d852cd66eccfdfea74f1509be9e7b3876d7b",
	            "SandboxKey": "/var/run/docker/netns/27de14e01e59",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-238319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:b6:d7:55:8a:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc34b4af0845da6c802dc81c73f4b4277beaad88933c210bf42a502e8671cd1e",
	                    "EndpointID": "843886e85ecb6b6a98659ebcd94714e55c69b175e039798c71961a40a0c31534",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-238319",
	                        "f3bf6c5c8f72"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
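The inspect dump confirms the pause acted inside the node rather than on Docker: the container reports "Status": "running" with "Paused": false, and the API server port 8443 is still published on 127.0.0.1:33129. When only a few fields matter, the dump can be narrowed with a Go template instead of printing everything (a sketch, container name from this run):

    docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pause-238319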
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-238319 -n pause-238319
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-238319 -n pause-238319: exit status 2 (346.154932ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-238319 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-238319 logs -n 25: (1.114937308s)
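Each "==> component <==" section in the dump below is capped at about its last 25 lines because the harness collects logs with -n 25. For a deeper post-mortem the same command accepts a larger window, and writing to a file avoids further truncation (a sketch; the --file flag is assumed from minikube logs --help):

    out/minikube-linux-amd64 -p pause-238319 logs -n 200 --file /tmp/pause-238319.log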
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-345705 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat docker --no-pager                                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/docker/daemon.json                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo docker system info                                                                                    │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cri-dockerd --version                                                                                 │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat containerd --no-pager                                                                   │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo cat /etc/containerd/config.toml                                                                       │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo containerd config dump                                                                                │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat crio --no-pager                                                                         │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo crio config                                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p cilium-345705                                                                                                            │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p running-upgrade-896586                                                                                                   │ running-upgrade-896586    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ start   │ -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                      │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:41:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:41:24.288816  336575 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:41:24.289108  336575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:24.289121  336575 out.go:374] Setting ErrFile to fd 2...
	I1018 09:41:24.289129  336575 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:24.289366  336575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:41:24.289927  336575 out.go:368] Setting JSON to false
	I1018 09:41:24.291235  336575 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5028,"bootTime":1760775456,"procs":291,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:41:24.291321  336575 start.go:141] virtualization: kvm guest
	I1018 09:41:24.321398  336575 out.go:179] * [pause-238319] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:41:24.326846  336575 notify.go:220] Checking for updates...
	I1018 09:41:24.326903  336575 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:41:24.379170  336575 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:41:24.419393  336575 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:24.561406  336575 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:41:20.200590  331569 cli_runner.go:164] Run: docker network inspect missing-upgrade-631894 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:20.217919  331569 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:20.221965  331569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:20.234148  331569 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1018 09:41:20.234195  331569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:20.306478  331569 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 09:41:20.306491  331569 crio.go:415] Images already preloaded, skipping extraction
	I1018 09:41:20.306535  331569 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:20.357447  331569 crio.go:496] all images are preloaded for cri-o runtime.
	I1018 09:41:20.357464  331569 cache_images.go:84] Images are preloaded, skipping loading
	I1018 09:41:20.357535  331569 ssh_runner.go:195] Run: crio config
	I1018 09:41:20.407577  331569 cni.go:84] Creating CNI manager for ""
	I1018 09:41:20.407594  331569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:20.407620  331569 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:20.407646  331569 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-631894 NodeName:missing-upgrade-631894 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:20.407836  331569 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-631894"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:41:20.407920  331569 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-631894 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-631894 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1018 09:41:20.407982  331569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1018 09:41:20.418178  331569 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:20.418244  331569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:20.428100  331569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1018 09:41:20.448021  331569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:20.472305  331569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1018 09:41:20.494071  331569 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:20.498312  331569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:20.511495  331569 certs.go:56] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894 for IP: 192.168.94.2
	I1018 09:41:20.511542  331569 certs.go:190] acquiring lock for shared ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.511712  331569 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:20.511748  331569 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:20.511801  331569 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key
	I1018 09:41:20.511817  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt with IP's: []
	I1018 09:41:20.696367  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt ...
	I1018 09:41:20.696386  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.crt: {Name:mk51bf5afbe904b78b9574c2fb9cadd5afabe338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.696586  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key ...
	I1018 09:41:20.696612  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/client.key: {Name:mkedc03b8ae5cc6d524aeeda020e1557303aa579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:20.696747  331569 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a
	I1018 09:41:20.696764  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1018 09:41:21.030819  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a ...
	I1018 09:41:21.030858  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a: {Name:mk7a2d75cdb4fca07c179be3f5b6d3b1671ef307 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.031052  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a ...
	I1018 09:41:21.031068  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a: {Name:mk7c78659988893a4580797ea91a9a97127e2e5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.031170  331569 certs.go:337] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt
	I1018 09:41:21.031255  331569 certs.go:341] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key
	I1018 09:41:21.031317  331569 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key
	I1018 09:41:21.031331  331569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt with IP's: []
	I1018 09:41:21.461630  331569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt ...
	I1018 09:41:21.461653  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt: {Name:mke8614aad72b5f639121966ef3fa66b60af1af9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.461817  331569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key ...
	I1018 09:41:21.461849  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key: {Name:mka98cfff82147a59349ca9ee298e41761c35c11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:21.462064  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:21.462104  331569 certs.go:433] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:21.462117  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:21.462150  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:21.462271  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:21.462337  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:21.462384  331569 certs.go:437] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:21.463109  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1018 09:41:21.490725  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:21.516775  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:21.547113  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/missing-upgrade-631894/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:41:21.579327  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:21.607879  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:21.634263  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:21.660707  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:21.686408  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:21.716439  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:21.744025  331569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:21.769772  331569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:21.788480  331569 ssh_runner.go:195] Run: openssl version
	I1018 09:41:21.794743  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:21.805844  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.809941  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.809987  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:21.817167  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:41:21.828232  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:21.838890  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.843051  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.843110  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:21.850177  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:21.860873  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:21.871434  331569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.875621  331569 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.875676  331569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:21.882682  331569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:21.893020  331569 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1018 09:41:21.896854  331569 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1018 09:41:21.896923  331569 kubeadm.go:404] StartCluster: {Name:missing-upgrade-631894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-631894 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1018 09:41:21.897107  331569 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:21.897165  331569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:21.934386  331569 cri.go:89] found id: ""
	I1018 09:41:21.934455  331569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:21.944153  331569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:21.953812  331569 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:21.953878  331569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:21.963251  331569 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:21.963299  331569 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:22.049049  331569 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:41:22.121306  331569 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
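
With no stale kubeconfigs to clean up, the bootstrap proceeds straight to `kubeadm init`, passing the whole preflight list as `--ignore-preflight-errors` because the docker driver cannot satisfy checks like SystemVerification inside a container. A hedged sketch of assembling such an invocation (the helper name is hypothetical; the binary path and check names come from the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // kubeadmInitCmd builds a command line shaped like the one logged above.
    func kubeadmInitCmd(version, config string, ignored []string) string {
        return fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init --config %s --ignore-preflight-errors=%s`,
            version, config, strings.Join(ignored, ","),
        )
    }

    func main() {
        fmt.Println(kubeadmInitCmd("v1.28.3", "/var/tmp/minikube/kubeadm.yaml",
            []string{"Swap", "NumCPU", "Mem", "SystemVerification"}))
    }
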
	I1018 09:41:24.935365  336575 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:41:25.155096  336575 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:41:25.189539  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:25.190207  336575 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:41:25.218158  336575 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:41:25.218247  336575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:25.276799  336575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:41:25.266869533 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:25.276928  336575 docker.go:318] overlay module found
	I1018 09:41:25.458534  336575 out.go:179] * Using the docker driver based on existing profile
	I1018 09:41:25.496958  336575 start.go:305] selected driver: docker
	I1018 09:41:25.496984  336575 start.go:925] validating driver "docker" against &{Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:25.497140  336575 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:41:25.497226  336575 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:25.549690  336575 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-18 09:41:25.540665834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:25.550556  336575 cni.go:84] Creating CNI manager for ""
	I1018 09:41:25.550626  336575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:25.550683  336575 start.go:349] cluster config:
	{Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
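
The cni.go lines above show the selection rule in action: the docker driver combined with a non-docker runtime gets kindnet, since the kic container's network needs an in-cluster CNI that the runtime does not supply. A sketch of that decision (the non-kindnet branch is an assumption, not shown in this log):

    package main

    import "fmt"

    // chooseCNI sketches the rule implied by the log line
    // `"docker" driver + "crio" runtime found, recommending kindnet`.
    func chooseCNI(driver, runtime string) string {
        if driver == "docker" && runtime != "docker" {
            return "kindnet"
        }
        return "auto" // assumption: other combinations are resolved elsewhere
    }

    func main() {
        fmt.Println(chooseCNI("docker", "crio")) // kindnet, as in the log
    }
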
	I1018 09:41:20.780280  335228 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:41:20.780515  335228 start.go:159] libmachine.API.Create for "force-systemd-flag-565668" (driver="docker")
	I1018 09:41:20.780612  335228 client.go:168] LocalClient.Create starting
	I1018 09:41:20.780694  335228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:41:20.780736  335228 main.go:141] libmachine: Decoding PEM data...
	I1018 09:41:20.780763  335228 main.go:141] libmachine: Parsing certificate...
	I1018 09:41:20.780862  335228 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:41:20.780896  335228 main.go:141] libmachine: Decoding PEM data...
	I1018 09:41:20.780913  335228 main.go:141] libmachine: Parsing certificate...
	I1018 09:41:20.781243  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:41:20.800540  335228 cli_runner.go:211] docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:41:20.800623  335228 network_create.go:284] running [docker network inspect force-systemd-flag-565668] to gather additional debugging logs...
	I1018 09:41:20.800650  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668
	W1018 09:41:20.825202  335228 cli_runner.go:211] docker network inspect force-systemd-flag-565668 returned with exit code 1
	I1018 09:41:20.825245  335228 network_create.go:287] error running [docker network inspect force-systemd-flag-565668]: docker network inspect force-systemd-flag-565668: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-565668 not found
	I1018 09:41:20.825263  335228 network_create.go:289] output of [docker network inspect force-systemd-flag-565668]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-565668 not found
	
	** /stderr **
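
The --format template in the failed inspect asks Docker for a single JSON object (Name, Driver, Subnet, Gateway, MTU, ContainerIPs); exit code 1 here only means the network does not exist yet, which routes into the create path below. Decoding that JSON in Go might look like this (field names copied from the template; the sample value is fabricated for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // netInfo matches the fields the --format template emits.
    type netInfo struct {
        Name         string   `json:"Name"`
        Driver       string   `json:"Driver"`
        Subnet       string   `json:"Subnet"`
        Gateway      string   `json:"Gateway"`
        MTU          int      `json:"MTU"`
        ContainerIPs []string `json:"ContainerIPs"`
    }

    func main() {
        raw := `{"Name":"bridge","Driver":"bridge","Subnet":"172.17.0.0/16","Gateway":"172.17.0.1","MTU":1500,"ContainerIPs":[]}`
        var n netInfo
        if err := json.Unmarshal([]byte(raw), &n); err != nil {
            panic(err)
        }
        fmt.Printf("%s -> %s via %s\n", n.Name, n.Subnet, n.Gateway)
    }
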
	I1018 09:41:20.825419  335228 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:20.849770  335228 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:41:20.850277  335228 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:41:20.850905  335228 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:41:20.851471  335228 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-cc34b4af0845 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:e2:10:92:51:72:61} reservation:<nil>}
	I1018 09:41:20.852311  335228 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cb7d90}
	I1018 09:41:20.852333  335228 network_create.go:124] attempt to create docker network force-systemd-flag-565668 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1018 09:41:20.852376  335228 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-565668 force-systemd-flag-565668
	I1018 09:41:20.935790  335228 network_create.go:108] docker network force-systemd-flag-565668 192.168.85.0/24 created
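
The probe above walks candidate /24s under 192.168.0.0/16 and settles on the first free one; the sequence 49, 58, 67, 76, 85 suggests a step of 9 in the third octet. A minimal sketch under that inference:

    package main

    import "fmt"

    // freeSubnet returns the first candidate /24 not already taken.
    func freeSubnet(taken map[string]bool) string {
        for octet := 49; octet < 256; octet += 9 { // step inferred from 49,58,67,76,85
            cidr := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[cidr] {
                return cidr
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
        }
        fmt.Println(freeSubnet(taken)) // 192.168.85.0/24, matching the log
    }
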
	I1018 09:41:20.935852  335228 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-565668" container
	I1018 09:41:20.935932  335228 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:41:20.958263  335228 cli_runner.go:164] Run: docker volume create force-systemd-flag-565668 --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:41:20.977754  335228 oci.go:103] Successfully created a docker volume force-systemd-flag-565668
	I1018 09:41:20.977842  335228 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-565668-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --entrypoint /usr/bin/test -v force-systemd-flag-565668:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:41:21.396338  335228 oci.go:107] Successfully prepared a docker volume force-systemd-flag-565668
	I1018 09:41:21.396397  335228 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:21.396437  335228 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:41:21.396517  335228 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-565668:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
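
Rather than pulling images inside the new node, a throwaway kicbase container mounts the host-side lz4 preload read-only and untars it directly into the named volume that becomes the node's /var. The same docker run, reassembled in Go (arguments from the log line; the image digest is elided for brevity):

    package main

    import "os/exec"

    func main() {
        tarball := "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
        volume := "force-systemd-flag-565668"
        image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
        // docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
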
	I1018 09:41:25.592507  336575 out.go:179] * Starting "pause-238319" primary control-plane node in "pause-238319" cluster
	I1018 09:41:25.595848  336575 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:41:25.624994  336575 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:41:25.857412  336575 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:41:25.857418  336575 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:25.857493  336575 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:41:25.857505  336575 cache.go:58] Caching tarball of preloaded images
	I1018 09:41:25.857607  336575 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:41:25.857624  336575 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:41:25.857781  336575 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/config.json ...
	I1018 09:41:25.877715  336575 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:41:25.877739  336575 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:41:25.877758  336575 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:41:25.877788  336575 start.go:360] acquireMachinesLock for pause-238319: {Name:mkcd41232403b5a8a9e87ba238de3b17794afc29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:41:25.877871  336575 start.go:364] duration metric: took 58.249µs to acquireMachinesLock for "pause-238319"
	I1018 09:41:25.877896  336575 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:41:25.877906  336575 fix.go:54] fixHost starting: 
	I1018 09:41:25.878131  336575 cli_runner.go:164] Run: docker container inspect pause-238319 --format={{.State.Status}}
	I1018 09:41:25.896685  336575 fix.go:112] recreateIfNeeded on pause-238319: state=Running err=<nil>
	W1018 09:41:25.896713  336575 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:41:24.027073  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:41:24.027093  332699 ubuntu.go:182] provisioning hostname "cert-expiration-650496"
	I1018 09:41:24.027186  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.045073  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:24.045272  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:24.045279  332699 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-650496 && echo "cert-expiration-650496" | sudo tee /etc/hostname
	I1018 09:41:24.241542  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:41:24.241655  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.262040  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:24.262241  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:24.262252  332699 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-650496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-650496/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-650496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:24.398522  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:24.398552  332699 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:24.398574  332699 ubuntu.go:190] setting up certificates
	I1018 09:41:24.398594  332699 provision.go:84] configureAuth start
	I1018 09:41:24.398657  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:24.418619  332699 provision.go:143] copyHostCerts
	I1018 09:41:24.418679  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:24.418688  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:24.418762  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:24.418922  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:24.418929  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:24.418970  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:24.419063  332699 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:24.419069  332699 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:24.419116  332699 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:24.419191  332699 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-650496 san=[127.0.0.1 192.168.103.2 cert-expiration-650496 localhost minikube]
	I1018 09:41:24.559927  332699 provision.go:177] copyRemoteCerts
	I1018 09:41:24.559988  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:24.560023  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:24.580908  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:24.677388  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:24.972623  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:24.990481  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:41:25.008127  332699 provision.go:87] duration metric: took 609.51711ms to configureAuth
	I1018 09:41:25.008150  332699 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:25.008324  332699 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:25.008411  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.025506  332699 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:25.025718  332699 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:41:25.025729  332699 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:25.746005  332699 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:41:25.746020  332699 machine.go:96] duration metric: took 4.878467225s to provisionDockerMachine
	I1018 09:41:25.746033  332699 client.go:171] duration metric: took 13.306226966s to LocalClient.Create
	I1018 09:41:25.746050  332699 start.go:167] duration metric: took 13.306288297s to libmachine.API.Create "cert-expiration-650496"
	I1018 09:41:25.746056  332699 start.go:293] postStartSetup for "cert-expiration-650496" (driver="docker")
	I1018 09:41:25.746064  332699 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:25.746114  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:25.746145  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.763700  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:25.880574  332699 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:25.884784  332699 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:25.884807  332699 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:25.884817  332699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:25.884882  332699 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:25.884973  332699 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:25.885088  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:25.894316  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:25.918934  332699 start.go:296] duration metric: took 172.865255ms for postStartSetup
	I1018 09:41:25.919340  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:25.946852  332699 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/config.json ...
	I1018 09:41:25.947170  332699 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:25.947218  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:25.969944  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.078482  332699 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:26.085016  332699 start.go:128] duration metric: took 13.648693998s to createHost
	I1018 09:41:26.085037  332699 start.go:83] releasing machines lock for "cert-expiration-650496", held for 13.648836289s
	I1018 09:41:26.085112  332699 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:41:26.114457  332699 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:26.114510  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:26.114939  332699 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:26.115017  332699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:41:26.145064  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.147469  332699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:41:26.346256  332699 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:26.355486  332699 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:26.407233  332699 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:26.417400  332699 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:26.417466  332699 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:26.451557  332699 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
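
Competing bridge/podman CNI configs are renamed with a .mk_disabled suffix rather than deleted, so only the CNI minikube installs is loaded and the change stays reversible. A simplified sketch of that rename pass (directory from the log; the matching is looser than the find expression above):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNIs renames bridge/podman CNI configs aside, like the
    // `find ... -exec mv {} {}.mk_disabled` step in the log.
    func disableBridgeCNIs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                if err := os.Rename(old, old+".mk_disabled"); err != nil {
                    return err
                }
                fmt.Println("disabled", old)
            }
        }
        return nil
    }

    func main() {
        _ = disableBridgeCNIs("/etc/cni/net.d")
    }
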
	I1018 09:41:26.451574  332699 start.go:495] detecting cgroup driver to use...
	I1018 09:41:26.451645  332699 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:41:26.451790  332699 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:26.475528  332699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:26.495396  332699 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:26.495445  332699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:26.520655  332699 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:26.542990  332699 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:26.697286  332699 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:26.863314  332699 docker.go:234] disabling docker service ...
	I1018 09:41:26.863385  332699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:26.893807  332699 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:26.915405  332699 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:27.058442  332699 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:27.185781  332699 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
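
Both cri-docker and docker get the same four-step quiesce: stop the socket, stop the service, disable the socket, mask the service, so neither can reclaim the CRI socket once CRI-O owns it. As a sketch (the sequence mirrors the systemctl calls above; failures are tolerated, as the log's unchecked runs suggest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // quiesceUnit mirrors the stop/disable/mask sequence from the log.
    func quiesceUnit(unit string) {
        for _, args := range [][]string{
            {"systemctl", "stop", "-f", unit + ".socket"},
            {"systemctl", "stop", "-f", unit + ".service"},
            {"systemctl", "disable", unit + ".socket"},
            {"systemctl", "mask", unit + ".service"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Println("ignored:", args, err) // best-effort, as in the log
            }
        }
    }

    func main() {
        quiesceUnit("cri-docker")
        quiesceUnit("docker")
    }
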
	I1018 09:41:27.201519  332699 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:27.220479  332699 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:27.220543  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.237193  332699 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:27.237244  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.249155  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.262016  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.273886  332699 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:27.285771  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.297976  332699 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.315361  332699 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:27.326115  332699 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:27.336149  332699 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:41:27.346189  332699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:27.454768  332699 ssh_runner.go:195] Run: sudo systemctl restart crio
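
After the sed passes above, the drop-in plausibly reads roughly as follows. This is a reconstruction from the commands, not a capture from the node, and it assumes the keys already existed in /etc/crio/crio.conf.d/02-crio.conf:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Alongside it, the crictl.yaml written earlier points crictl at unix:///var/run/crio/crio.sock, which is why the socket and crictl checks that follow can succeed right after the restart.
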
	I1018 09:41:27.568174  332699 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:27.568229  332699 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:27.573567  332699 start.go:563] Will wait 60s for crictl version
	I1018 09:41:27.573613  332699 ssh_runner.go:195] Run: which crictl
	I1018 09:41:27.578332  332699 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:27.606406  332699 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:41:27.606476  332699 ssh_runner.go:195] Run: crio --version
	I1018 09:41:27.636884  332699 ssh_runner.go:195] Run: crio --version
	I1018 09:41:27.674068  332699 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:25.898750  336575 out.go:252] * Updating the running docker "pause-238319" container ...
	I1018 09:41:25.898790  336575 machine.go:93] provisionDockerMachine start ...
	I1018 09:41:25.898910  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:25.920660  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:25.921014  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:25.921041  336575 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:41:26.079806  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-238319
	
	I1018 09:41:26.079854  336575 ubuntu.go:182] provisioning hostname "pause-238319"
	I1018 09:41:26.079910  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.105645  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:26.106231  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:26.106262  336575 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-238319 && echo "pause-238319" | sudo tee /etc/hostname
	I1018 09:41:26.286316  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-238319
	
	I1018 09:41:26.286403  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.309023  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:26.309356  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:26.309388  336575 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-238319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-238319/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-238319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:26.463175  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:26.463208  336575 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:26.463232  336575 ubuntu.go:190] setting up certificates
	I1018 09:41:26.463243  336575 provision.go:84] configureAuth start
	I1018 09:41:26.463303  336575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-238319
	I1018 09:41:26.487898  336575 provision.go:143] copyHostCerts
	I1018 09:41:26.487983  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:26.488004  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:26.488088  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:26.488929  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:26.488944  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:26.488995  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:26.489109  336575 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:26.489116  336575 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:26.489151  336575 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:26.489242  336575 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.pause-238319 san=[127.0.0.1 192.168.76.2 localhost minikube pause-238319]
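
The server certificate covers the loopback address, the node IP, and the usual host names, so a client can validate TLS against any of them. A self-signed Go sketch with the same SAN set (the real cert is signed by the minikube CA, and the RSA key type here is an assumption):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.pause-238319"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
            DNSNames:     []string{"localhost", "minikube", "pause-238319"},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
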
	I1018 09:41:26.775177  336575 provision.go:177] copyRemoteCerts
	I1018 09:41:26.775309  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:26.775364  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:26.811423  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:26.939068  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:26.970564  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:41:27.002140  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:27.025634  336575 provision.go:87] duration metric: took 562.375747ms to configureAuth
	I1018 09:41:27.025767  336575 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:27.026093  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:27.026234  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.052747  336575 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.053138  336575 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33126 <nil> <nil>}
	I1018 09:41:27.053168  336575 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:27.422411  336575 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:41:27.422444  336575 machine.go:96] duration metric: took 1.523644304s to provisionDockerMachine
	I1018 09:41:27.422458  336575 start.go:293] postStartSetup for "pause-238319" (driver="docker")
	I1018 09:41:27.422472  336575 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:27.422559  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:27.422607  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.443115  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.546048  336575 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:27.550484  336575 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:27.550517  336575 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:27.550529  336575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:27.550595  336575 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:27.550698  336575 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:27.550911  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:27.561481  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:27.583852  336575 start.go:296] duration metric: took 161.37521ms for postStartSetup
	I1018 09:41:27.583931  336575 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:27.583985  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.605872  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.706053  336575 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:27.711912  336575 fix.go:56] duration metric: took 1.834000888s for fixHost
	I1018 09:41:27.711953  336575 start.go:83] releasing machines lock for "pause-238319", held for 1.834053861s
	I1018 09:41:27.712015  336575 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-238319
	I1018 09:41:27.732355  336575 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:27.732414  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.732437  336575 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:27.732516  336575 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-238319
	I1018 09:41:27.754065  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.754645  336575 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33126 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/pause-238319/id_rsa Username:docker}
	I1018 09:41:27.930889  336575 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:27.941675  336575 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:28.003105  336575 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:28.008601  336575 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:28.008677  336575 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:28.017411  336575 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:41:28.017435  336575 start.go:495] detecting cgroup driver to use...
	I1018 09:41:28.017466  336575 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:41:28.017507  336575 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:28.033788  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:28.049538  336575 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:28.049606  336575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:28.086143  336575 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:28.108551  336575 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:28.255769  336575 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:28.381755  336575 docker.go:234] disabling docker service ...
	I1018 09:41:28.381854  336575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:28.399715  336575 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:28.414175  336575 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:28.561962  336575 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:28.682695  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:41:28.695669  336575 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:28.711636  336575 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:28.711702  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.721965  336575 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:28.722033  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.731294  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.740583  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.749658  336575 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:28.757880  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.768182  336575 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.777385  336575 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:28.786720  336575 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:28.794740  336575 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
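The sed/grep edits above all target the same drop-in. Reconstructed from those substitutions (an inferred end-state, not captured from the node), /etc/crio/crio.conf.d/02-crio.conf comes out roughly as:

	# Inferred contents of the CRI-O drop-in after the edits above;
	# <<- strips the leading tabs so this prints the bare TOML.
	cat <<-'EOF'
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	EOF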
	I1018 09:41:28.802630  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:28.952585  336575 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:41:29.122922  336575 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:29.123001  336575 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:29.128177  336575 start.go:563] Will wait 60s for crictl version
	I1018 09:41:29.128247  336575 ssh_runner.go:195] Run: which crictl
	I1018 09:41:29.132650  336575 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:29.164957  336575 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:41:29.165038  336575 ssh_runner.go:195] Run: crio --version
	I1018 09:41:29.200020  336575 ssh_runner.go:195] Run: crio --version
	I1018 09:41:29.242943  336575 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:29.244452  336575 cli_runner.go:164] Run: docker network inspect pause-238319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:29.265791  336575 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:29.270622  336575 kubeadm.go:883] updating cluster {Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:29.270798  336575 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:29.270885  336575 ssh_runner.go:195] Run: sudo crictl images --output json
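The preload check above parses `crictl images --output json` on the node. The same inventory can be listed by hand (jq is assumed to be available; it is not part of the test tooling):

	# Print every image tag the CRI runtime currently holds, sorted.
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort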
	I1018 09:41:25.906623  335228 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-565668:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.510057539s)
	I1018 09:41:25.906663  335228 kic.go:203] duration metric: took 4.510234729s to extract preloaded images to volume ...
	W1018 09:41:25.906758  335228 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:41:25.906811  335228 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:41:25.906887  335228 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:41:25.982498  335228 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-565668 --name force-systemd-flag-565668 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-565668 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-565668 --network force-systemd-flag-565668 --ip 192.168.85.2 --volume force-systemd-flag-565668:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:41:26.349427  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Running}}
	I1018 09:41:26.375989  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:26.397381  335228 cli_runner.go:164] Run: docker exec force-systemd-flag-565668 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:41:26.455156  335228 oci.go:144] the created container "force-systemd-flag-565668" has a running status.
	I1018 09:41:26.455191  335228 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa...
	I1018 09:41:26.861223  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1018 09:41:26.861333  335228 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:41:26.896938  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:26.924676  335228 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:41:26.924736  335228 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-565668 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:41:26.992300  335228 cli_runner.go:164] Run: docker container inspect force-systemd-flag-565668 --format={{.State.Status}}
	I1018 09:41:27.017072  335228 machine.go:93] provisionDockerMachine start ...
	I1018 09:41:27.017271  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.043441  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.043787  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.043802  335228 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:41:27.210715  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-565668
	
	I1018 09:41:27.210749  335228 ubuntu.go:182] provisioning hostname "force-systemd-flag-565668"
	I1018 09:41:27.210813  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.235014  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.235318  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.235340  335228 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-565668 && echo "force-systemd-flag-565668" | sudo tee /etc/hostname
	I1018 09:41:27.404859  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-565668
	
	I1018 09:41:27.404961  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:27.426277  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:27.426555  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:27.426575  335228 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-565668' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-565668/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-565668' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:41:27.570020  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:41:27.570051  335228 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:41:27.570076  335228 ubuntu.go:190] setting up certificates
	I1018 09:41:27.570089  335228 provision.go:84] configureAuth start
	I1018 09:41:27.570148  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:27.590488  335228 provision.go:143] copyHostCerts
	I1018 09:41:27.590530  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:27.590571  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:41:27.590583  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:41:27.590669  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:41:27.590787  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:27.590816  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:41:27.590838  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:41:27.590882  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:41:27.590960  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:27.590985  335228 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:41:27.590991  335228 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:41:27.591033  335228 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:41:27.591108  335228 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-565668 san=[127.0.0.1 192.168.85.2 force-systemd-flag-565668 localhost minikube]
	I1018 09:41:28.126042  335228 provision.go:177] copyRemoteCerts
	I1018 09:41:28.126124  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:41:28.126173  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.148088  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.258378  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1018 09:41:28.258450  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1018 09:41:28.278583  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1018 09:41:28.278643  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:41:28.301960  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1018 09:41:28.302030  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:41:28.321812  335228 provision.go:87] duration metric: took 751.709103ms to configureAuth
	I1018 09:41:28.321855  335228 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:41:28.322027  335228 config.go:182] Loaded profile config "force-systemd-flag-565668": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:28.322141  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.341102  335228 main.go:141] libmachine: Using SSH client type: native
	I1018 09:41:28.341445  335228 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33154 <nil> <nil>}
	I1018 09:41:28.341473  335228 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:41:28.584690  335228 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:41:28.584717  335228 machine.go:96] duration metric: took 1.567540681s to provisionDockerMachine
	I1018 09:41:28.584728  335228 client.go:171] duration metric: took 7.804104974s to LocalClient.Create
	I1018 09:41:28.584748  335228 start.go:167] duration metric: took 7.804233999s to libmachine.API.Create "force-systemd-flag-565668"
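A hypothetical follow-up (not something the test runs) to confirm that the sysconfig drop-in written a few lines up was applied and CRI-O survived its restart:

	# Show the generated fragment and check the unit state.
	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio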
	I1018 09:41:28.584757  335228 start.go:293] postStartSetup for "force-systemd-flag-565668" (driver="docker")
	I1018 09:41:28.584771  335228 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:41:28.584867  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:41:28.584919  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.608731  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.713530  335228 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:41:28.717633  335228 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:41:28.717672  335228 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:41:28.717686  335228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:41:28.717739  335228 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:41:28.717873  335228 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:41:28.717890  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> /etc/ssl/certs/1346112.pem
	I1018 09:41:28.718013  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:41:28.726384  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:28.747428  335228 start.go:296] duration metric: took 162.656191ms for postStartSetup
	I1018 09:41:28.747790  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:28.766782  335228 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/config.json ...
	I1018 09:41:28.767072  335228 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:41:28.767127  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.785750  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.879940  335228 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:41:28.885794  335228 start.go:128] duration metric: took 8.108001274s to createHost
	I1018 09:41:28.885834  335228 start.go:83] releasing machines lock for "force-systemd-flag-565668", held for 8.108168072s
	I1018 09:41:28.885924  335228 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-565668
	I1018 09:41:28.906081  335228 ssh_runner.go:195] Run: cat /version.json
	I1018 09:41:28.906119  335228 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:41:28.906142  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.906179  335228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-565668
	I1018 09:41:28.928580  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:28.928580  335228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33154 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/force-systemd-flag-565668/id_rsa Username:docker}
	I1018 09:41:29.088690  335228 ssh_runner.go:195] Run: systemctl --version
	I1018 09:41:29.096624  335228 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:41:29.143293  335228 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:41:29.148992  335228 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:41:29.149068  335228 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:41:29.181007  335228 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:41:29.181051  335228 start.go:495] detecting cgroup driver to use...
	I1018 09:41:29.181067  335228 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1018 09:41:29.181127  335228 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:41:29.202497  335228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:41:29.217004  335228 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:41:29.217065  335228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:41:29.239158  335228 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:41:29.262878  335228 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:41:29.377249  335228 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:41:29.502228  335228 docker.go:234] disabling docker service ...
	I1018 09:41:29.502289  335228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:41:29.530588  335228 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:41:29.553036  335228 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:41:29.661258  335228 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:41:29.775732  335228 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:41:29.789585  335228 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:41:29.805788  335228 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:41:29.805866  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.816039  335228 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:41:29.816101  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.825304  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.834762  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.844605  335228 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:41:29.852985  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.861672  335228 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.875705  335228 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:41:29.884621  335228 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:41:29.892930  335228 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:41:29.901654  335228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:29.990359  335228 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:41:30.099585  335228 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:41:30.099717  335228 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:41:30.104409  335228 start.go:563] Will wait 60s for crictl version
	I1018 09:41:30.104463  335228 ssh_runner.go:195] Run: which crictl
	I1018 09:41:30.109170  335228 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:41:30.136691  335228 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:41:30.136769  335228 ssh_runner.go:195] Run: crio --version
	I1018 09:41:30.171475  335228 ssh_runner.go:195] Run: crio --version
	I1018 09:41:30.210563  335228 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:41:29.317255  336575 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:29.317284  336575 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:29.317340  336575 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:29.349766  336575 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:29.349795  336575 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:29.349813  336575 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:29.350004  336575 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-238319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:29.350107  336575 ssh_runner.go:195] Run: crio config
	I1018 09:41:29.417213  336575 cni.go:84] Creating CNI manager for ""
	I1018 09:41:29.417244  336575 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:29.417277  336575 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:29.417312  336575 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-238319 NodeName:pause-238319 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:29.417474  336575 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-238319"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:41:29.417539  336575 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:29.432487  336575 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:29.432576  336575 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:29.442868  336575 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 09:41:29.459698  336575 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:29.476036  336575 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
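With kubeadm.yaml.new now on the node, a rendered manifest like the one above can be sanity-checked offline before it is ever applied; recent kubeadm releases (v1.26 and later) ship a validator for exactly this:

	# Validate the generated kubeadm config without touching the cluster.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new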
	I1018 09:41:29.492273  336575 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:29.497416  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:29.656704  336575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:29.672647  336575 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319 for IP: 192.168.76.2
	I1018 09:41:29.672681  336575 certs.go:195] generating shared ca certs ...
	I1018 09:41:29.672704  336575 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:29.673052  336575 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:29.673231  336575 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:29.673255  336575 certs.go:257] generating profile certs ...
	I1018 09:41:29.673388  336575 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key
	I1018 09:41:29.673465  336575 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.key.eeadefb0
	I1018 09:41:29.673531  336575 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.key
	I1018 09:41:29.673684  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:29.673881  336575 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:29.673898  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:29.673935  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:29.674012  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:29.674059  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:29.674122  336575 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:29.675008  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:29.698315  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:29.725733  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:29.745389  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:29.765486  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1018 09:41:29.783664  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:29.802929  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:29.821767  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:41:29.840667  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:29.858848  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:29.876721  336575 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:29.896015  336575 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:29.911439  336575 ssh_runner.go:195] Run: openssl version
	I1018 09:41:29.918864  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:29.927544  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.931361  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.931421  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.972177  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:29.980642  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:29.989184  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.993396  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.993455  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:30.032252  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:41:30.041627  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:30.050971  336575 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.054865  336575 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.054924  336575 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:30.096459  336575 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
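The openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: clients resolve a CA under /etc/ssl/certs by the hash of its subject name, so each trusted PEM needs a <hash>.0 symlink. One round, spelled out:

	# Compute the subject hash and create the lookup symlink OpenSSL expects.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"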
	I1018 09:41:30.106079  336575 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:30.110776  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:41:30.158072  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:41:30.208615  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:41:30.250955  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:41:30.290793  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:41:30.333815  336575 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
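Each -checkend 86400 probe above asks a single question: will this certificate still be valid 86400 seconds (24 hours) from now? Exit status 0 means yes, which is why no renewal is triggered here. Stand-alone form:

	# Exit 0 if the cert outlives the next 24h, non-zero otherwise.
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo 'still valid in 24h' || echo 'expires within 24h'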
	I1018 09:41:30.372753  336575 kubeadm.go:400] StartCluster: {Name:pause-238319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-238319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:30.372904  336575 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:30.372975  336575 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:30.408734  336575 cri.go:89] found id: "cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795"
	I1018 09:41:30.408759  336575 cri.go:89] found id: "fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9"
	I1018 09:41:30.408764  336575 cri.go:89] found id: "a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df"
	I1018 09:41:30.408777  336575 cri.go:89] found id: "8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239"
	I1018 09:41:30.408782  336575 cri.go:89] found id: "ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703"
	I1018 09:41:30.408788  336575 cri.go:89] found id: "45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e"
	I1018 09:41:30.408793  336575 cri.go:89] found id: "be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903"
	I1018 09:41:30.408797  336575 cri.go:89] found id: ""
	I1018 09:41:30.408855  336575 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:41:30.424218  336575 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:41:30Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:41:30.424304  336575 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:30.433618  336575 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:41:30.433641  336575 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:41:30.433696  336575 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:41:30.442013  336575 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:41:30.442485  336575 kubeconfig.go:125] found "pause-238319" server: "https://192.168.76.2:8443"
	I1018 09:41:30.443106  336575 kapi.go:59] client config for pause-238319: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:41:30.443527  336575 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:41:30.443543  336575 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:41:30.443548  336575 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:41:30.443551  336575 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:41:30.443556  336575 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:41:30.443941  336575 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:41:30.452328  336575 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:41:30.452357  336575 kubeadm.go:601] duration metric: took 18.709515ms to restartPrimaryControlPlane
	I1018 09:41:30.452367  336575 kubeadm.go:402] duration metric: took 79.62592ms to StartCluster
	I1018 09:41:30.452385  336575 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.452450  336575 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:30.453231  336575 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.453468  336575 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:41:30.453545  336575 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:41:30.453783  336575 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:30.455709  336575 out.go:179] * Verifying Kubernetes components...
	I1018 09:41:30.455712  336575 out.go:179] * Enabled addons: 
	I1018 09:41:30.211670  335228 cli_runner.go:164] Run: docker network inspect force-systemd-flag-565668 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:30.230651  335228 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:30.235062  335228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:30.246355  335228 kubeadm.go:883] updating cluster {Name:force-systemd-flag-565668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:30.246502  335228 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:30.246567  335228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:30.283650  335228 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:30.283669  335228 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:30.283713  335228 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:30.312418  335228 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:30.312443  335228 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:30.312453  335228 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:30.312562  335228 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-565668 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:30.312641  335228 ssh_runner.go:195] Run: crio config
	I1018 09:41:30.369584  335228 cni.go:84] Creating CNI manager for ""
	I1018 09:41:30.369611  335228 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:30.369633  335228 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:30.369665  335228 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-565668 NodeName:force-systemd-flag-565668 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:30.369781  335228 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-565668"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:41:30.369867  335228 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:30.378706  335228 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:30.378769  335228 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:30.387373  335228 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1018 09:41:30.401502  335228 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:30.420053  335228 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
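
Note: the kubeadm config dumped above is what lands in /var/tmp/minikube/kubeadm.yaml.new here (2221 bytes, matching the scp line). Newer kubeadm releases can sanity-check such a file offline; a sketch, assuming kubeadm v1.26+ is on PATH:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
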
	I1018 09:41:30.435752  335228 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:30.439482  335228 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
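
Note: the /etc/hosts rewrite above is idempotent: it drops any line ending in the tab-separated hostname, then re-appends the current mapping, so repeated starts never accumulate duplicates. Expanded for readability (a sketch equivalent to the logged one-liner; IP and name taken from this run):

    ip=192.168.85.2
    name=control-plane.minikube.internal
    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
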
	I1018 09:41:30.450328  335228 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:30.548764  335228 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:31.356932  331569 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1018 09:41:31.357006  331569 kubeadm.go:322] [preflight] Running pre-flight checks
	I1018 09:41:31.357142  331569 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:41:31.357220  331569 kubeadm.go:322] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:41:31.357256  331569 kubeadm.go:322] OS: Linux
	I1018 09:41:31.357292  331569 kubeadm.go:322] CGROUPS_CPU: enabled
	I1018 09:41:31.357332  331569 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I1018 09:41:31.357369  331569 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I1018 09:41:31.357407  331569 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I1018 09:41:31.357443  331569 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I1018 09:41:31.357526  331569 kubeadm.go:322] CGROUPS_PIDS: enabled
	I1018 09:41:31.357594  331569 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I1018 09:41:31.357656  331569 kubeadm.go:322] CGROUPS_IO: enabled
	I1018 09:41:31.357759  331569 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:41:31.357909  331569 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:41:31.358034  331569 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:41:31.358130  331569 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:41:31.359453  331569 out.go:204]   - Generating certificates and keys ...
	I1018 09:41:31.359564  331569 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1018 09:41:31.359648  331569 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1018 09:41:31.359737  331569 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:41:31.359815  331569 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:41:31.359911  331569 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:41:31.359967  331569 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1018 09:41:31.360019  331569 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1018 09:41:31.360179  331569 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost missing-upgrade-631894] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:41:31.360224  331569 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1018 09:41:31.360390  331569 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost missing-upgrade-631894] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:41:31.360468  331569 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:41:31.360529  331569 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:41:31.360591  331569 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1018 09:41:31.360672  331569 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:41:31.360742  331569 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:41:31.360861  331569 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:41:31.360942  331569 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:41:31.361035  331569 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:41:31.361104  331569 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:41:31.361155  331569 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:41:31.362950  331569 out.go:204]   - Booting up control plane ...
	I1018 09:41:31.363033  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:41:31.363109  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:41:31.363168  331569 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:41:31.363252  331569 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:41:31.363332  331569 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:41:31.363374  331569 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1018 09:41:31.363549  331569 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:41:31.363644  331569 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.502537 seconds
	I1018 09:41:31.363750  331569 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:41:31.363888  331569 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:41:31.363938  331569 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:41:31.364094  331569 kubeadm.go:322] [mark-control-plane] Marking the node missing-upgrade-631894 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:41:31.364146  331569 kubeadm.go:322] [bootstrap-token] Using token: ehousc.jzaxl23me8418t0u
	I1018 09:41:31.365239  331569 out.go:204]   - Configuring RBAC rules ...
	I1018 09:41:31.365368  331569 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:41:31.365437  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:41:31.365599  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:41:31.365773  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:41:31.365952  331569 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:41:31.366061  331569 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:41:31.366191  331569 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:41:31.366232  331569 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1018 09:41:31.366297  331569 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1018 09:41:31.366302  331569 kubeadm.go:322] 
	I1018 09:41:31.366375  331569 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1018 09:41:31.366380  331569 kubeadm.go:322] 
	I1018 09:41:31.366471  331569 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1018 09:41:31.366475  331569 kubeadm.go:322] 
	I1018 09:41:31.366494  331569 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1018 09:41:31.366545  331569 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:41:31.366588  331569 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:41:31.366591  331569 kubeadm.go:322] 
	I1018 09:41:31.366633  331569 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1018 09:41:31.366636  331569 kubeadm.go:322] 
	I1018 09:41:31.366672  331569 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:41:31.366675  331569 kubeadm.go:322] 
	I1018 09:41:31.366715  331569 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1018 09:41:31.366778  331569 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:41:31.366883  331569 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:41:31.366892  331569 kubeadm.go:322] 
	I1018 09:41:31.367012  331569 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:41:31.367124  331569 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1018 09:41:31.367130  331569 kubeadm.go:322] 
	I1018 09:41:31.367246  331569 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ehousc.jzaxl23me8418t0u \
	I1018 09:41:31.367384  331569 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:41:31.367401  331569 kubeadm.go:322] 	--control-plane 
	I1018 09:41:31.367404  331569 kubeadm.go:322] 
	I1018 09:41:31.367470  331569 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:41:31.367473  331569 kubeadm.go:322] 
	I1018 09:41:31.367540  331569 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ehousc.jzaxl23me8418t0u \
	I1018 09:41:31.367635  331569 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
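
Note: the --discovery-token-ca-cert-hash value above can be re-derived on the control plane at any time with the standard recipe from the kubeadm docs, pointed at minikube's CA location (a sketch; assumes the default RSA CA key):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
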
	I1018 09:41:31.367667  331569 cni.go:84] Creating CNI manager for ""
	I1018 09:41:31.367673  331569 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:31.369639  331569 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1018 09:41:31.370722  331569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:41:31.375570  331569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1018 09:41:31.375581  331569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1018 09:41:31.394665  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:41:32.060742  331569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:41:32.060814  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:41:32.060814  331569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=8220a6eb95f0a4d75f7f2d7b14cef975f050512d minikube.k8s.io/name=missing-upgrade-631894 minikube.k8s.io/updated_at=2025_10_18T09_41_32_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:41:32.069753  331569 ops.go:34] apiserver oom_adj: -16
	I1018 09:41:32.135847  331569 kubeadm.go:1081] duration metric: took 75.07922ms to wait for elevateKubeSystemPrivileges.
	I1018 09:41:32.154550  331569 kubeadm.go:406] StartCluster complete in 10.257621764s
	I1018 09:41:32.154589  331569 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:32.154676  331569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:32.155928  331569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:32.156206  331569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:41:32.156296  331569 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1018 09:41:32.156385  331569 addons.go:69] Setting storage-provisioner=true in profile "missing-upgrade-631894"
	I1018 09:41:32.156403  331569 addons.go:69] Setting default-storageclass=true in profile "missing-upgrade-631894"
	I1018 09:41:32.156404  331569 config.go:182] Loaded profile config "missing-upgrade-631894": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:41:32.156411  331569 addons.go:231] Setting addon storage-provisioner=true in "missing-upgrade-631894"
	I1018 09:41:32.156419  331569 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "missing-upgrade-631894"
	I1018 09:41:32.156470  331569 host.go:66] Checking if "missing-upgrade-631894" exists ...
	I1018 09:41:32.156802  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.156978  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.182583  331569 addons.go:231] Setting addon default-storageclass=true in "missing-upgrade-631894"
	I1018 09:41:32.182631  331569 host.go:66] Checking if "missing-upgrade-631894" exists ...
	I1018 09:41:32.183140  331569 cli_runner.go:164] Run: docker container inspect missing-upgrade-631894 --format={{.State.Status}}
	I1018 09:41:32.185450  331569 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:41:32.186153  331569 kapi.go:248] "coredns" deployment in "kube-system" namespace and "missing-upgrade-631894" context rescaled to 1 replicas
	I1018 09:41:32.186573  331569 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:41:32.186593  331569 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:41:32.186604  331569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:41:32.187696  331569 out.go:177] * Verifying Kubernetes components...
	I1018 09:41:27.675224  332699 cli_runner.go:164] Run: docker network inspect cert-expiration-650496 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:41:27.692406  332699 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:41:27.696648  332699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:27.707710  332699 kubeadm.go:883] updating cluster {Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:41:27.707905  332699 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:41:27.707966  332699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:27.749014  332699 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:27.749030  332699 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:41:27.749090  332699 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:41:27.787414  332699 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:41:27.787430  332699 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:41:27.787438  332699 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:41:27.787564  332699 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-650496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:41:27.787653  332699 ssh_runner.go:195] Run: crio config
	I1018 09:41:27.841977  332699 cni.go:84] Creating CNI manager for ""
	I1018 09:41:27.841996  332699 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:41:27.842016  332699 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:41:27.842043  332699 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-650496 NodeName:cert-expiration-650496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:41:27.842193  332699 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-650496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:41:27.842258  332699 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:41:27.850927  332699 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:41:27.850990  332699 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:41:27.859940  332699 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1018 09:41:27.881585  332699 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:41:27.898839  332699 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1018 09:41:27.930485  332699 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:41:27.938532  332699 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:41:27.962305  332699 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:28.062529  332699 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:28.108337  332699 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496 for IP: 192.168.103.2
	I1018 09:41:28.108350  332699 certs.go:195] generating shared ca certs ...
	I1018 09:41:28.108370  332699 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.108525  332699 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:28.108576  332699 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:28.108585  332699 certs.go:257] generating profile certs ...
	I1018 09:41:28.108644  332699 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key
	I1018 09:41:28.108662  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt with IP's: []
	I1018 09:41:28.436441  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt ...
	I1018 09:41:28.436459  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.crt: {Name:mka41d5a8c5180ef43755c2753eca367d5b30da0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.436651  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key ...
	I1018 09:41:28.436663  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/client.key: {Name:mk5973fa0ec4d3fc5dd5b89c40340b74358b4b10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.436776  332699 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9
	I1018 09:41:28.436790  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 09:41:28.631348  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 ...
	I1018 09:41:28.631365  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9: {Name:mkf5d9fd0696a98c125f4850eb0e8369a5f0bc2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.631508  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9 ...
	I1018 09:41:28.631534  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9: {Name:mka8eddd6f3f67c7a1eb0ed33729d4354a53fdf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.631604  332699 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt.300147a9 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt
	I1018 09:41:28.631692  332699 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key.300147a9 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key
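
Note: the SAN list used for the apiserver cert above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]) includes 10.96.0.1 because it is the first address of serviceSubnet 10.96.0.0/12, i.e. the ClusterIP of the in-cluster kubernetes Service, which clients also use to reach the apiserver. The SANs on the written cert can be confirmed with OpenSSL 1.1.1+ (a sketch; path taken from this run):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt
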
	I1018 09:41:28.631746  332699 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key
	I1018 09:41:28.631757  332699 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt with IP's: []
	I1018 09:41:28.936317  332699 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt ...
	I1018 09:41:28.936341  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt: {Name:mkecf2edf6790a0618b4e0abcc90392c08484139 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.936548  332699 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key ...
	I1018 09:41:28.936561  332699 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key: {Name:mkdc7347ef5384c82bf439f5d935082cebfec1c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:28.936838  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:28.936885  332699 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:28.936895  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:28.936926  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:28.936955  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:28.936983  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:28.937039  332699 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:28.938067  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:28.959093  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:28.977705  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:28.995555  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:29.014164  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:41:29.032453  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:41:29.052576  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:29.074895  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:41:29.095200  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:29.118617  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:29.142088  332699 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:29.164730  332699 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:29.181757  332699 ssh_runner.go:195] Run: openssl version
	I1018 09:41:29.189654  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:29.200357  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.205525  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.205578  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:29.261100  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:29.272722  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:29.282539  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.287407  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.287453  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:29.343876  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:29.355205  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:29.366180  332699 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.371333  332699 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.371389  332699 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:29.421588  332699 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
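
Note: the *.0 link names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is how the system trust directory indexes CA certificates; each ln -fs is preceded by an openssl x509 -hash run that computes the name. By hand (a sketch using the first cert from this run):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # h is b5213941 here
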
	I1018 09:41:29.437018  332699 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:29.441558  332699 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:41:29.441621  332699 kubeadm.go:400] StartCluster: {Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:41:29.441697  332699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:29.441752  332699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:29.474904  332699 cri.go:89] found id: ""
	I1018 09:41:29.474969  332699 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:29.485970  332699 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:29.496281  332699 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:29.496332  332699 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:29.506298  332699 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:29.506308  332699 kubeadm.go:157] found existing configuration files:
	
	I1018 09:41:29.506357  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:41:29.515459  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:41:29.515512  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:41:29.524506  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:41:29.538939  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:41:29.538986  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:41:29.549531  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:41:29.560229  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:41:29.560278  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:41:29.569434  332699 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:41:29.579284  332699 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:41:29.579329  332699 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
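
Note: the four grep/rm pairs above implement stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected endpoint, and is otherwise removed so kubeadm regenerates it. Condensed into a loop (a sketch equivalent to the logged commands):

    ep=https://control-plane.minikube.internal:8443
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ep" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
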
	I1018 09:41:29.589413  332699 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:29.674367  332699 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:41:29.750264  332699 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
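
Note: both warnings above are expected under the docker driver: the kic container cannot load the host's 'configs' kernel module, so minikube passes SystemVerification (among others) via --ignore-preflight-errors on the init command, and kubelet is started by minikube itself rather than enabled through systemd. The minimal form of that override (a sketch; the full flag set is visible in the Start line above):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
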
	I1018 09:41:30.456762  336575 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:41:30.456758  336575 addons.go:514] duration metric: took 3.22007ms for enable addons: enabled=[]
	I1018 09:41:30.579342  336575 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:41:30.594226  336575 node_ready.go:35] waiting up to 6m0s for node "pause-238319" to be "Ready" ...
	I1018 09:41:30.603306  336575 node_ready.go:49] node "pause-238319" is "Ready"
	I1018 09:41:30.603336  336575 node_ready.go:38] duration metric: took 9.062201ms for node "pause-238319" to be "Ready" ...
	I1018 09:41:30.603351  336575 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:41:30.603400  336575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:41:30.619528  336575 api_server.go:72] duration metric: took 166.027505ms to wait for apiserver process to appear ...
	I1018 09:41:30.619556  336575 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:41:30.619589  336575 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:41:30.625605  336575 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
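
Note: the healthz poll needs no credentials because /healthz, /livez and /readyz are exposed to unauthenticated clients through the system:public-info-viewer ClusterRole, so the same probe works by hand (a sketch; endpoint taken from this run):

    curl -k https://192.168.76.2:8443/healthz    # prints: ok
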
	I1018 09:41:30.626611  336575 api_server.go:141] control plane version: v1.34.1
	I1018 09:41:30.626648  336575 api_server.go:131] duration metric: took 7.072795ms to wait for apiserver health ...
	I1018 09:41:30.626660  336575 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:41:30.629758  336575 system_pods.go:59] 7 kube-system pods found
	I1018 09:41:30.629799  336575 system_pods.go:61] "coredns-66bc5c9577-lqmd8" [6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867] Running
	I1018 09:41:30.629810  336575 system_pods.go:61] "etcd-pause-238319" [efb9eb2e-4b92-4587-817e-27213d4814e7] Running
	I1018 09:41:30.629843  336575 system_pods.go:61] "kindnet-w8lp5" [3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed] Running
	I1018 09:41:30.629851  336575 system_pods.go:61] "kube-apiserver-pause-238319" [3635fda7-eccb-4928-a6a8-c8ccef65afff] Running
	I1018 09:41:30.629857  336575 system_pods.go:61] "kube-controller-manager-pause-238319" [ad3b8090-cb83-44cd-bb61-48729d3ad835] Running
	I1018 09:41:30.629867  336575 system_pods.go:61] "kube-proxy-769dd" [3b6484de-71d8-4a6c-93ba-2ae0eb18308b] Running
	I1018 09:41:30.629872  336575 system_pods.go:61] "kube-scheduler-pause-238319" [95cf08f1-1435-462f-b949-ad6a907e32c8] Running
	I1018 09:41:30.629882  336575 system_pods.go:74] duration metric: took 3.205101ms to wait for pod list to return data ...
	I1018 09:41:30.629897  336575 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:41:30.631716  336575 default_sa.go:45] found service account: "default"
	I1018 09:41:30.631736  336575 default_sa.go:55] duration metric: took 1.831893ms for default service account to be created ...
	I1018 09:41:30.631746  336575 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:41:30.634070  336575 system_pods.go:86] 7 kube-system pods found
	I1018 09:41:30.634091  336575 system_pods.go:89] "coredns-66bc5c9577-lqmd8" [6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867] Running
	I1018 09:41:30.634096  336575 system_pods.go:89] "etcd-pause-238319" [efb9eb2e-4b92-4587-817e-27213d4814e7] Running
	I1018 09:41:30.634099  336575 system_pods.go:89] "kindnet-w8lp5" [3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed] Running
	I1018 09:41:30.634103  336575 system_pods.go:89] "kube-apiserver-pause-238319" [3635fda7-eccb-4928-a6a8-c8ccef65afff] Running
	I1018 09:41:30.634106  336575 system_pods.go:89] "kube-controller-manager-pause-238319" [ad3b8090-cb83-44cd-bb61-48729d3ad835] Running
	I1018 09:41:30.634109  336575 system_pods.go:89] "kube-proxy-769dd" [3b6484de-71d8-4a6c-93ba-2ae0eb18308b] Running
	I1018 09:41:30.634112  336575 system_pods.go:89] "kube-scheduler-pause-238319" [95cf08f1-1435-462f-b949-ad6a907e32c8] Running
	I1018 09:41:30.634118  336575 system_pods.go:126] duration metric: took 2.366264ms to wait for k8s-apps to be running ...
	I1018 09:41:30.634126  336575 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:41:30.634166  336575 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:30.648104  336575 system_svc.go:56] duration metric: took 13.966014ms WaitForService to wait for kubelet
	I1018 09:41:30.648139  336575 kubeadm.go:586] duration metric: took 194.642708ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:41:30.648162  336575 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:41:30.651032  336575 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:41:30.651062  336575 node_conditions.go:123] node cpu capacity is 8
	I1018 09:41:30.651075  336575 node_conditions.go:105] duration metric: took 2.906823ms to run NodePressure ...
	I1018 09:41:30.651087  336575 start.go:241] waiting for startup goroutines ...
	I1018 09:41:30.651096  336575 start.go:246] waiting for cluster config update ...
	I1018 09:41:30.651105  336575 start.go:255] writing updated cluster config ...
	I1018 09:41:30.651445  336575 ssh_runner.go:195] Run: rm -f paused
	I1018 09:41:30.656249  336575 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:41:30.656914  336575 kapi.go:59] client config for pause-238319: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:41:30.659937  336575 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lqmd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.665210  336575 pod_ready.go:94] pod "coredns-66bc5c9577-lqmd8" is "Ready"
	I1018 09:41:30.665236  336575 pod_ready.go:86] duration metric: took 5.276598ms for pod "coredns-66bc5c9577-lqmd8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.667358  336575 pod_ready.go:83] waiting for pod "etcd-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.671487  336575 pod_ready.go:94] pod "etcd-pause-238319" is "Ready"
	I1018 09:41:30.671513  336575 pod_ready.go:86] duration metric: took 4.13365ms for pod "etcd-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.673613  336575 pod_ready.go:83] waiting for pod "kube-apiserver-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.678250  336575 pod_ready.go:94] pod "kube-apiserver-pause-238319" is "Ready"
	I1018 09:41:30.678273  336575 pod_ready.go:86] duration metric: took 4.638952ms for pod "kube-apiserver-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:30.680468  336575 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.060847  336575 pod_ready.go:94] pod "kube-controller-manager-pause-238319" is "Ready"
	I1018 09:41:31.060886  336575 pod_ready.go:86] duration metric: took 380.398443ms for pod "kube-controller-manager-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.260729  336575 pod_ready.go:83] waiting for pod "kube-proxy-769dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.661165  336575 pod_ready.go:94] pod "kube-proxy-769dd" is "Ready"
	I1018 09:41:31.661192  336575 pod_ready.go:86] duration metric: took 400.441287ms for pod "kube-proxy-769dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:31.860218  336575 pod_ready.go:83] waiting for pod "kube-scheduler-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:32.264198  336575 pod_ready.go:94] pod "kube-scheduler-pause-238319" is "Ready"
	I1018 09:41:32.264229  336575 pod_ready.go:86] duration metric: took 403.983061ms for pod "kube-scheduler-pause-238319" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:41:32.264242  336575 pod_ready.go:40] duration metric: took 1.607951424s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:41:32.336115  336575 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:41:32.338754  336575 out.go:179] * Done! kubectl is now configured to use "pause-238319" cluster and "default" namespace by default
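
Note: at this point the pause-238319 profile is the active kubectl context, as the Done! line states; a manual spot check mirrors what the test asserts (a sketch):

    kubectl config current-context    # pause-238319
    kubectl get nodes                 # the single node should report Ready
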
	I1018 09:41:32.186688  331569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631894
	I1018 09:41:32.188968  331569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:41:32.209606  331569 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:41:32.209618  331569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:41:32.209664  331569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-631894
	I1018 09:41:32.212846  331569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/missing-upgrade-631894/id_rsa Username:docker}
	I1018 09:41:32.230150  331569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/missing-upgrade-631894/id_rsa Username:docker}
	I1018 09:41:32.250207  331569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:41:32.251400  331569 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:41:32.251448  331569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:41:32.334081  331569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:41:32.347880  331569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:41:32.572864  331569 start.go:926] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
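
The sed pipeline run at 09:41:32.250207 rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway: it inserts a hosts{} block just before the forward stanza in the Corefile. A small Go sketch of the same text transformation (illustrative only; minikube does it with sed over kubectl, as logged above):

```go
// Insert a hosts{} record for host.minikube.internal ahead of the
// "forward . /etc/resolv.conf" stanza of a Corefile. Illustrative sketch.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // hosts{} must come before forward so it wins
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.94.1"))
}
```
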
	I1018 09:41:32.572947  331569 api_server.go:72] duration metric: took 386.341227ms to wait for apiserver process to appear ...
	I1018 09:41:32.572965  331569 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:41:32.572983  331569 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:41:32.579561  331569 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:41:32.581093  331569 api_server.go:141] control plane version: v1.28.3
	I1018 09:41:32.581112  331569 api_server.go:131] duration metric: took 8.140467ms to wait for apiserver health ...
	I1018 09:41:32.581122  331569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:41:32.590510  331569 system_pods.go:59] 4 kube-system pods found
	I1018 09:41:32.590546  331569 system_pods.go:61] "etcd-missing-upgrade-631894" [e53c74aa-9bce-412b-aac0-8da7140f834d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:41:32.590557  331569 system_pods.go:61] "kube-apiserver-missing-upgrade-631894" [8f3c1359-aff2-40e3-98b9-d07436bc79ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:41:32.590568  331569 system_pods.go:61] "kube-controller-manager-missing-upgrade-631894" [ea8ae884-a7f3-4663-a05b-e7171359d550] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:41:32.590578  331569 system_pods.go:61] "kube-scheduler-missing-upgrade-631894" [e52bac3d-821f-4972-a84c-f9b558213a4a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:41:32.590585  331569 system_pods.go:74] duration metric: took 9.457267ms to wait for pod list to return data ...
	I1018 09:41:32.590597  331569 kubeadm.go:581] duration metric: took 403.995233ms to wait for : map[apiserver:true system_pods:true] ...
	I1018 09:41:32.590611  331569 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:41:32.598794  331569 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:41:32.598807  331569 node_conditions.go:123] node cpu capacity is 8
	I1018 09:41:32.598819  331569 node_conditions.go:105] duration metric: took 8.203823ms to run NodePressure ...
	I1018 09:41:32.598844  331569 start.go:228] waiting for startup goroutines ...
	I1018 09:41:32.805981  331569 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
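
The api_server.go lines above (09:41:32.572983 through .581112) poll https://192.168.94.2:8443/healthz until it answers 200 "ok" before checking pod state. A hedged sketch of that probe follows; skipping TLS verification is a shortcut for illustration, whereas minikube authenticates with the cluster CA from its kubeconfig.

```go
// Poll an apiserver /healthz endpoint until it returns HTTP 200.
// Sketch only; InsecureSkipVerify is an illustration shortcut.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ready after %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
```
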
	I1018 09:41:30.571575  335228 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668 for IP: 192.168.85.2
	I1018 09:41:30.571599  335228 certs.go:195] generating shared ca certs ...
	I1018 09:41:30.571621  335228 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.571788  335228 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:41:30.571879  335228 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:41:30.571897  335228 certs.go:257] generating profile certs ...
	I1018 09:41:30.571972  335228 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key
	I1018 09:41:30.571996  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt with IP's: []
	I1018 09:41:30.701009  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt ...
	I1018 09:41:30.701041  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.crt: {Name:mk6bfb4f0817ac3fa3d50a7e4151da3d6430608a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.701262  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key ...
	I1018 09:41:30.701286  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/client.key: {Name:mke2f6da2972b68b9d2a4fb4b67a395a35c5409d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:30.701415  335228 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894
	I1018 09:41:30.701440  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1018 09:41:31.423018  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 ...
	I1018 09:41:31.423055  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894: {Name:mkd4aaf21ba135ffa62b6eb85fc66b04757b0486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.423268  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894 ...
	I1018 09:41:31.423291  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894: {Name:mk9cbab455dd004db12d7ec9e2e45f615cbfb732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.423428  335228 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt.4fbec894 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt
	I1018 09:41:31.423540  335228 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key.4fbec894 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key
	I1018 09:41:31.423625  335228 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key
	I1018 09:41:31.423651  335228 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt with IP's: []
	I1018 09:41:31.926675  335228 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt ...
	I1018 09:41:31.926704  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt: {Name:mkced8c1daff26edd02db359e819db628030f328 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:41:31.926899  335228 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key ...
	I1018 09:41:31.926920  335228 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key: {Name:mk7e8de9d64656c38a7bb0c2c877583b42a915c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
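
The certs.go/crypto.go lines above mint the profile certificates for force-systemd-flag-565668: client, apiserver (with the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.85.2 listed at 09:41:30.701440), and the aggregator proxy-client cert, all signed by the shared minikube CA. A minimal crypto/x509 sketch of issuing such a SAN-bearing server cert from a CA (a throwaway CA here; minikube reuses ~/.minikube/ca.crt and ca.key):

```go
// Sign a server certificate with fixed IP SANs using a local CA.
// Illustrative sketch, not minikube's crypto.go; errors elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for the sketch.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the same IP SANs as the apiserver cert in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
```
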
	I1018 09:41:31.927044  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1018 09:41:31.927070  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1018 09:41:31.927085  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1018 09:41:31.927105  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1018 09:41:31.927126  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1018 09:41:31.927142  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1018 09:41:31.927160  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1018 09:41:31.927178  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1018 09:41:31.927240  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:41:31.927285  335228 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:41:31.927299  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:41:31.927332  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:41:31.927363  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:41:31.927395  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:41:31.927451  335228 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:41:31.927491  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:31.927512  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem -> /usr/share/ca-certificates/134611.pem
	I1018 09:41:31.927527  335228 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> /usr/share/ca-certificates/1346112.pem
	I1018 09:41:31.928166  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:41:31.950185  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:41:31.971569  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:41:31.989006  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:41:32.008136  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 09:41:32.027124  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:41:32.048603  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:41:32.069743  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/force-systemd-flag-565668/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:41:32.089326  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:41:32.109238  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:41:32.129714  335228 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:41:32.152095  335228 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:41:32.169655  335228 ssh_runner.go:195] Run: openssl version
	I1018 09:41:32.181601  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:41:32.195104  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.201152  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.201216  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:41:32.255069  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:41:32.271629  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:41:32.288946  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.296789  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.296970  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:41:32.355468  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:41:32.374321  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:41:32.387785  335228 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.393003  335228 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.393112  335228 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:41:32.449061  335228 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
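
The three openssl/ln sequences above install each CA PEM under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL-based clients locate trust anchors. A sketch of one such install step, shelling out to openssl exactly as the log does (run with sufficient privileges; paths are illustrative):

```go
// Copy a CA PEM into the system cert directory and create the
// /etc/ssl/certs/<subject-hash>.0 symlink. Sketch of the logged steps.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCA(pemPath, name string) error {
	dst := filepath.Join("/usr/share/ca-certificates", name)
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0o644); err != nil {
		return err
	}
	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs`
	return os.Symlink(dst, link)
}

func main() {
	if err := installCA("ca.crt", "minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```
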
	I1018 09:41:32.463070  335228 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:41:32.469240  335228 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:41:32.469304  335228 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-565668 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-565668 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1018 09:41:32.469384  335228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:41:32.469445  335228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:41:32.521592  335228 cri.go:89] found id: ""
	I1018 09:41:32.521669  335228 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:41:32.540879  335228 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:41:32.557549  335228 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:41:32.557653  335228 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:41:32.570169  335228 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:41:32.570229  335228 kubeadm.go:157] found existing configuration files:
	
	I1018 09:41:32.570292  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:41:32.579524  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:41:32.579639  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:41:32.592894  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:41:32.603287  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:41:32.603347  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:41:32.613600  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:41:32.622710  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:41:32.622764  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:41:32.631706  335228 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:41:32.640860  335228 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:41:32.640927  335228 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
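
The kubeadm.go:163 blocks above are the stale-config cleanup: for each kubeconfig under /etc/kubernetes, grep for the expected control-plane endpoint and remove the file when the endpoint is absent (here all four are simply missing, so kubeadm will regenerate them). A compact Go sketch of the same check-and-remove:

```go
// Keep /etc/kubernetes/*.conf only if it already points at the expected
// control-plane endpoint; otherwise remove it so kubeadm regenerates it.
// Sketch of the logged cleanup, not minikube's kubeadm.go.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(conf); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Println("keeping", conf)
	}
}
```
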
	I1018 09:41:32.650036  335228 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:41:32.693455  335228 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:41:32.693529  335228 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:41:32.718522  335228 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:41:32.718630  335228 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:41:32.718689  335228 kubeadm.go:318] OS: Linux
	I1018 09:41:32.718753  335228 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:41:32.718816  335228 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:41:32.718928  335228 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:41:32.719031  335228 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:41:32.719115  335228 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:41:32.719163  335228 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:41:32.719203  335228 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:41:32.719239  335228 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:41:32.808176  335228 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:41:32.808322  335228 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:41:32.808434  335228 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:41:32.819273  335228 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:41:32.807049  331569 addons.go:502] enable addons completed in 650.752442ms: enabled=[storage-provisioner default-storageclass]
	I1018 09:41:32.807084  331569 start.go:233] waiting for cluster config update ...
	I1018 09:41:32.807105  331569 start.go:242] writing updated cluster config ...
	I1018 09:41:32.807377  331569 ssh_runner.go:195] Run: rm -f paused
	I1018 09:41:32.864495  331569 start.go:600] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1018 09:41:32.865898  331569 out.go:177] 
	W1018 09:41:32.867343  331569 out.go:239] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1018 09:41:32.868759  331569 out.go:177]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1018 09:41:32.870376  331569 out.go:177] * Done! kubectl is now configured to use "missing-upgrade-631894" cluster and "default" namespace by default
	I1018 09:41:32.821779  335228 out.go:252]   - Generating certificates and keys ...
	I1018 09:41:32.821916  335228 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:41:32.822007  335228 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:41:33.017392  335228 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:41:33.072490  335228 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:41:33.193781  335228 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:41:33.384548  335228 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:41:33.514123  335228 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:41:33.514329  335228 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-565668 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:41:34.147789  335228 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:41:34.147987  335228 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-565668 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1018 09:41:34.267083  335228 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:41:34.594744  335228 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:41:34.797884  335228 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:41:34.797992  335228 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:41:34.969680  335228 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:41:35.250008  335228 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:41:35.499175  335228 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:41:35.691375  335228 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:41:35.924593  335228 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:41:35.925464  335228 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:41:35.932515  335228 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
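
The kubeadm.go:318 lines above are kubeadm's own phase output (preflight, certs, kubeconfig, etcd and control-plane manifests), all driven by the single `kubeadm init` invocation started at 09:41:32.650036. A hedged os/exec sketch of launching that command; the preflight ignore list is abridged here from the full list in the Start: log line.

```go
// Run kubeadm init against a prepared config, ignoring selected preflight
// checks, as minikube's ssh_runner does on the node. Illustrative sketch.
package main

import (
	"os"
	"os/exec"
)

func main() {
	ignored := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification," +
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables" // abridged
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"),
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+ignored,
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
```
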
	
	
	==> CRI-O <==
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.052597133Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.053518702Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.053535533Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.05354967Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.054469947Z" level=info msg="Conmon does support the --sync option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.05449043Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059162289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059187671Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.059914425Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = true\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.060328783Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.060368533Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.066912565Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.114194339Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-lqmd8 Namespace:kube-system ID:3d6907308702e966e0f74bce0fdf6191620f32d933b84ad08ed8b2357f29db60 UID:6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867 NetNS:/var/run/netns/2e6accb6-c824-4780-9538-6a9f11d29b7d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000ca780}] Aliases:map[]}"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.1144422Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-lqmd8 for CNI network kindnet (type=ptp)"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115084921Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115123111Z" level=info msg="Starting seccomp notifier watcher"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115192175Z" level=info msg="Create NRI interface"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115402784Z" level=info msg="built-in NRI default validator is disabled"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115423512Z" level=info msg="runtime interface created"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115438026Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115445974Z" level=info msg="runtime interface starting up..."
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115453588Z" level=info msg="starting plugins..."
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.115469015Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 18 09:41:29 pause-238319 crio[2172]: time="2025-10-18T09:41:29.116166011Z" level=info msg="No systemd watchdog enabled"
	Oct 18 09:41:29 pause-238319 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	cb50c5561b8a2       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   3d6907308702e       coredns-66bc5c9577-lqmd8               kube-system
	fc0bb0d4fc4e6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   44d7361e6f355       kindnet-w8lp5                          kube-system
	a019d95fa3490       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   9d162f4b827b8       kube-proxy-769dd                       kube-system
	8074b8d8db125       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   ed2b2de72f9b7       kube-apiserver-pause-238319            kube-system
	ab8c2763e1457       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   5e0f9fd1573dd       etcd-pause-238319                      kube-system
	45e57534f6b2e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   6d15ad88e83db       kube-scheduler-pause-238319            kube-system
	be1d9fd168ccb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   95ab681df50c5       kube-controller-manager-pause-238319   kube-system
	
	
	==> coredns [cb50c5561b8a28159de66c6e421c73e788439a08f5d404a6e136e4b904504795] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44329 - 19230 "HINFO IN 2235917094381022108.6888995318603568073. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024023698s
	
	
	==> describe nodes <==
	Name:               pause-238319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-238319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=pause-238319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_41_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:41:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-238319
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:41:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:41:21 +0000   Sat, 18 Oct 2025 09:41:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-238319
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                8fb9a7a0-8858-4074-8948-817d47122c80
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-lqmd8                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-238319                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-w8lp5                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-238319             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-238319    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-769dd                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-238319             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-238319 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-238319 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-238319 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node pause-238319 event: Registered Node pause-238319 in Controller
	  Normal  NodeReady                17s   kubelet          Node pause-238319 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [ab8c2763e1457e03e4fe2887dc9efa7d087b973ed3f586c84497a6659af10703] <==
	{"level":"warn","ts":"2025-10-18T09:41:01.270719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.285059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.290042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.305890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:41:01.364977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51230","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:41:10.295314Z","caller":"traceutil/trace.go:172","msg":"trace[461286718] transaction","detail":"{read_only:false; response_revision:338; number_of_response:1; }","duration":"157.16995ms","start":"2025-10-18T09:41:10.138122Z","end":"2025-10-18T09:41:10.295292Z","steps":["trace[461286718] 'process raft request'  (duration: 124.546043ms)","trace[461286718] 'compare'  (duration: 32.495696ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:41:10.295355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.157904ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-18T09:41:10.295436Z","caller":"traceutil/trace.go:172","msg":"trace[312135317] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:337; }","duration":"131.283172ms","start":"2025-10-18T09:41:10.164135Z","end":"2025-10-18T09:41:10.295418Z","steps":["trace[312135317] 'agreement among raft nodes before linearized reading'  (duration: 98.505306ms)","trace[312135317] 'range keys from in-memory index tree'  (duration: 32.524198ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:41:10.296491Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.862666ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-10-18T09:41:10.296544Z","caller":"traceutil/trace.go:172","msg":"trace[747627744] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:338; }","duration":"114.923378ms","start":"2025-10-18T09:41:10.181609Z","end":"2025-10-18T09:41:10.296533Z","steps":["trace[747627744] 'agreement among raft nodes before linearized reading'  (duration: 114.77565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:10.296536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.859947ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kube-proxy\" limit:1 ","response":"range_response_count:1 size:185"}
	{"level":"info","ts":"2025-10-18T09:41:10.296576Z","caller":"traceutil/trace.go:172","msg":"trace[1044351635] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kube-proxy; range_end:; response_count:1; response_revision:338; }","duration":"114.91131ms","start":"2025-10-18T09:41:10.181656Z","end":"2025-10-18T09:41:10.296568Z","steps":["trace[1044351635] 'agreement among raft nodes before linearized reading'  (duration: 114.793838ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296812Z","caller":"traceutil/trace.go:172","msg":"trace[62056727] transaction","detail":"{read_only:false; response_revision:339; number_of_response:1; }","duration":"156.61289ms","start":"2025-10-18T09:41:10.140170Z","end":"2025-10-18T09:41:10.296783Z","steps":["trace[62056727] 'process raft request'  (duration: 156.247996ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296810Z","caller":"traceutil/trace.go:172","msg":"trace[1498315554] transaction","detail":"{read_only:false; response_revision:341; number_of_response:1; }","duration":"152.019008ms","start":"2025-10-18T09:41:10.144781Z","end":"2025-10-18T09:41:10.296800Z","steps":["trace[1498315554] 'process raft request'  (duration: 151.980287ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:10.296859Z","caller":"traceutil/trace.go:172","msg":"trace[1338658320] transaction","detail":"{read_only:false; response_revision:340; number_of_response:1; }","duration":"156.490837ms","start":"2025-10-18T09:41:10.140363Z","end":"2025-10-18T09:41:10.296853Z","steps":["trace[1338658320] 'process raft request'  (duration: 156.358893ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:13.676368Z","caller":"traceutil/trace.go:172","msg":"trace[380464295] transaction","detail":"{read_only:false; response_revision:374; number_of_response:1; }","duration":"128.328728ms","start":"2025-10-18T09:41:13.548021Z","end":"2025-10-18T09:41:13.676349Z","steps":["trace[380464295] 'process raft request'  (duration: 124.269729ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:14.947911Z","caller":"traceutil/trace.go:172","msg":"trace[368124397] transaction","detail":"{read_only:false; response_revision:376; number_of_response:1; }","duration":"156.598008ms","start":"2025-10-18T09:41:14.791297Z","end":"2025-10-18T09:41:14.947895Z","steps":["trace[368124397] 'process raft request'  (duration: 156.397132ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:18.990224Z","caller":"traceutil/trace.go:172","msg":"trace[1176147796] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"164.789475ms","start":"2025-10-18T09:41:18.825412Z","end":"2025-10-18T09:41:18.990202Z","steps":["trace[1176147796] 'process raft request'  (duration: 164.610616ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:19.143206Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.431541ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-238319\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-18T09:41:19.143784Z","caller":"traceutil/trace.go:172","msg":"trace[1140840071] range","detail":"{range_begin:/registry/minions/pause-238319; range_end:; response_count:1; response_revision:379; }","duration":"108.021043ms","start":"2025-10-18T09:41:19.035722Z","end":"2025-10-18T09:41:19.143743Z","steps":["trace[1140840071] 'agreement among raft nodes before linearized reading'  (duration: 40.436224ms)","trace[1140840071] 'range keys from in-memory index tree'  (duration: 66.922084ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:41:19.145841Z","caller":"traceutil/trace.go:172","msg":"trace[329876065] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"146.869319ms","start":"2025-10-18T09:41:18.998927Z","end":"2025-10-18T09:41:19.145796Z","steps":["trace[329876065] 'process raft request'  (duration: 77.288627ms)","trace[329876065] 'compare'  (duration: 66.891713ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:41:19.550090Z","caller":"traceutil/trace.go:172","msg":"trace[778341082] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"261.30654ms","start":"2025-10-18T09:41:19.288767Z","end":"2025-10-18T09:41:19.550073Z","steps":["trace[778341082] 'process raft request'  (duration: 261.202289ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:41:19.707062Z","caller":"traceutil/trace.go:172","msg":"trace[2121977494] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"148.007042ms","start":"2025-10-18T09:41:19.559033Z","end":"2025-10-18T09:41:19.707040Z","steps":["trace[2121977494] 'process raft request'  (duration: 147.860496ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:41:25.155012Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"262.575037ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:41:25.155090Z","caller":"traceutil/trace.go:172","msg":"trace[683017480] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:403; }","duration":"262.675155ms","start":"2025-10-18T09:41:24.892398Z","end":"2025-10-18T09:41:25.155073Z","steps":["trace[683017480] 'range keys from in-memory index tree'  (duration: 262.527696ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:41:38 up  1:24,  0 user,  load average: 5.50, 2.61, 1.48
	Linux pause-238319 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fc0bb0d4fc4e65de13e98f313b9612efea7c1345f5805feb1a4a92b098766fe9] <==
	I1018 09:41:10.643338       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:41:10.643733       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:41:10.643923       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:41:10.643945       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:41:10.643971       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:41:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:41:10.918784       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:41:10.918836       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:41:10.918851       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:41:10.919025       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:41:11.319563       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:41:11.319601       1 metrics.go:72] Registering metrics
	I1018 09:41:11.319659       1 controller.go:711] "Syncing nftables rules"
	I1018 09:41:20.919459       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:41:20.919514       1 main.go:301] handling current node
	I1018 09:41:30.923810       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:41:30.923880       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8074b8d8db125c1763e6634b51d91476f5f4341b4314ab0d19ee17e915081239] <==
	I1018 09:41:01.912260       1 policy_source.go:240] refreshing policies
	E1018 09:41:01.951459       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1018 09:41:01.998898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:41:02.004409       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:02.004981       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:41:02.014500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:02.014888       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:41:02.103526       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:41:02.801676       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:41:02.805841       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:41:02.805918       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:41:03.379415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:41:03.422560       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:41:03.507037       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:41:03.516053       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:41:03.517192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:41:03.522732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:41:03.897925       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:41:04.491793       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:41:04.503709       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:41:04.510882       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:41:09.550679       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:41:09.705400       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:09.710868       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:41:10.003404       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [be1d9fd168ccbc88a1ae4984ccaa690a5a284e7713511e3376e39c36c372f903] <==
	I1018 09:41:08.896910       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:41:08.896990       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:41:08.897000       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:41:08.897032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:41:08.897203       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:41:08.897216       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:41:08.897233       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:41:08.897633       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:41:08.897688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:41:08.897725       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:41:08.897870       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:41:08.898115       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:41:08.899312       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:41:08.899342       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:41:08.901603       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:41:08.906851       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:41:08.907889       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:41:08.924189       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:41:08.930537       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:41:08.936783       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:41:08.945857       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:41:08.948039       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:41:08.948054       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:41:08.948060       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:41:23.850644       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a019d95fa3490ac09c3720d197c684b3dd34f3ddd19c0eed1e63fe48cccfb2df] <==
	I1018 09:41:10.475562       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:41:10.548153       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:41:10.649434       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:41:10.649496       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:41:10.649749       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:41:10.675464       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:41:10.675542       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:41:10.682656       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:41:10.683243       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:41:10.683267       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:41:10.685085       1 config.go:200] "Starting service config controller"
	I1018 09:41:10.685618       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:41:10.685460       1 config.go:309] "Starting node config controller"
	I1018 09:41:10.685698       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:41:10.685705       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:41:10.685479       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:41:10.685714       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:41:10.685476       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:41:10.685727       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:41:10.786008       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:41:10.786049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:41:10.786107       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [45e57534f6b2ee505cd92bbf9caca937d90fab1e893258907dd9b9d5c454863e] <==
	E1018 09:41:01.871418       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:41:01.871536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:41:01.871546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:41:01.871602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:41:01.871229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:41:01.872750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:41:01.872783       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:41:01.872862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:41:01.872970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:41:01.873032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:41:01.873101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:41:01.873101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:41:02.700846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:41:02.706843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:41:02.743623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:41:02.785320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:41:02.896152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:41:02.931654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:41:02.972258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:41:02.977905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:41:03.113987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:41:03.134159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:41:03.188556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:41:03.244790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 09:41:05.664600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.424236    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-238319" podStartSLOduration=1.424212968 podStartE2EDuration="1.424212968s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.409929844 +0000 UTC m=+1.155030491" watchObservedRunningTime="2025-10-18 09:41:05.424212968 +0000 UTC m=+1.169313615"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.424396    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-238319" podStartSLOduration=1.424386297 podStartE2EDuration="1.424386297s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.424097698 +0000 UTC m=+1.169198344" watchObservedRunningTime="2025-10-18 09:41:05.424386297 +0000 UTC m=+1.169486937"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.443413    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-238319" podStartSLOduration=1.443372179 podStartE2EDuration="1.443372179s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.443218855 +0000 UTC m=+1.188319501" watchObservedRunningTime="2025-10-18 09:41:05.443372179 +0000 UTC m=+1.188472823"
	Oct 18 09:41:05 pause-238319 kubelet[1332]: I1018 09:41:05.443666    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-238319" podStartSLOduration=1.4436529089999999 podStartE2EDuration="1.443652909s" podCreationTimestamp="2025-10-18 09:41:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:05.433524355 +0000 UTC m=+1.178625015" watchObservedRunningTime="2025-10-18 09:41:05.443652909 +0000 UTC m=+1.188753555"
	Oct 18 09:41:08 pause-238319 kubelet[1332]: I1018 09:41:08.896056    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:41:08 pause-238319 kubelet[1332]: I1018 09:41:08.897351    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077639    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr9cc\" (UniqueName: \"kubernetes.io/projected/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-kube-api-access-zr9cc\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077701    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-xtables-lock\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.077727    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzxg8\" (UniqueName: \"kubernetes.io/projected/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-kube-api-access-bzxg8\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078477    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-kube-proxy\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078583    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-lib-modules\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078620    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-xtables-lock\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078663    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b6484de-71d8-4a6c-93ba-2ae0eb18308b-lib-modules\") pod \"kube-proxy-769dd\" (UID: \"3b6484de-71d8-4a6c-93ba-2ae0eb18308b\") " pod="kube-system/kube-proxy-769dd"
	Oct 18 09:41:10 pause-238319 kubelet[1332]: I1018 09:41:10.078687    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed-cni-cfg\") pod \"kindnet-w8lp5\" (UID: \"3412dae0-a55d-4f1f-8ff5-f50cf1cc51ed\") " pod="kube-system/kindnet-w8lp5"
	Oct 18 09:41:11 pause-238319 kubelet[1332]: I1018 09:41:11.436969    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-769dd" podStartSLOduration=1.436945891 podStartE2EDuration="1.436945891s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:11.436329114 +0000 UTC m=+7.181429772" watchObservedRunningTime="2025-10-18 09:41:11.436945891 +0000 UTC m=+7.182046537"
	Oct 18 09:41:11 pause-238319 kubelet[1332]: I1018 09:41:11.437104    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w8lp5" podStartSLOduration=1.437092484 podStartE2EDuration="1.437092484s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:11.426018995 +0000 UTC m=+7.171119641" watchObservedRunningTime="2025-10-18 09:41:11.437092484 +0000 UTC m=+7.182193130"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.013218    1332 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.161614    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvrwg\" (UniqueName: \"kubernetes.io/projected/6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867-kube-api-access-rvrwg\") pod \"coredns-66bc5c9577-lqmd8\" (UID: \"6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867\") " pod="kube-system/coredns-66bc5c9577-lqmd8"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.161672    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867-config-volume\") pod \"coredns-66bc5c9577-lqmd8\" (UID: \"6dc5a3a9-be4f-4bf3-8cf8-a516bd3e4867\") " pod="kube-system/coredns-66bc5c9577-lqmd8"
	Oct 18 09:41:21 pause-238319 kubelet[1332]: I1018 09:41:21.450645    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lqmd8" podStartSLOduration=11.450618886 podStartE2EDuration="11.450618886s" podCreationTimestamp="2025-10-18 09:41:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:41:21.450191894 +0000 UTC m=+17.195292540" watchObservedRunningTime="2025-10-18 09:41:21.450618886 +0000 UTC m=+17.195719533"
	Oct 18 09:41:29 pause-238319 kubelet[1332]: E1018 09:41:29.379234    1332 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Oct 18 09:41:32 pause-238319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:41:32 pause-238319 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:41:32 pause-238319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:41:32 pause-238319 systemd[1]: kubelet.service: Consumed 1.244s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-238319 -n pause-238319
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-238319 -n pause-238319: exit status 2 (364.124678ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-238319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.62s)
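
Note on this failure mode: `minikube pause` returned, but the status check above still reports the apiserver as Running, i.e. the pause never took effect on this crio node. One way to cross-check the container state by hand (a diagnostic sketch, assuming the pause-238319 node container is still up and ships crictl, as kicbase images normally do):

	# List all pod containers inside the minikube node container;
	# after a successful pause these would no longer be in Running state.
	docker exec pause-238319 crictl ps -a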

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (227.401266ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:43:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
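The exit status 11 here is minikube's MK_ADDON_ENABLE_PAUSED guard: before enabling an addon it checks for paused containers by running `sudo runc list -f json` on the node, and on this crio node that fails because the runc state directory /run/runc does not exist. The check can be replayed by hand (a reproduction sketch, assuming the old-k8s-version-619885 container is still running):

	# Same command the paused-state check shells out to, run inside the node:
	minikube ssh -p old-k8s-version-619885 -- sudo runc list -f json
	# Fails on this node with:
	#   level=error msg="open /run/runc: no such file or directory"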
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-619885 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-619885 describe deploy/metrics-server -n kube-system: exit status 1 (58.357596ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-619885 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
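start_stop_delete_test.go:219 looks for the substring " fake.domain/registry.k8s.io/echoserver:1.4" in the describe output, and the deployment info above is empty because the enable step never created the deployment. Once a metrics-server deployment does exist, the image it runs can be read directly (a sketch reusing the same kubectl context):

	kubectl --context old-k8s-version-619885 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'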
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-619885
helpers_test.go:243: (dbg) docker inspect old-k8s-version-619885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	        "Created": "2025-10-18T09:42:17.27822051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353820,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:42:17.314019893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hosts",
	        "LogPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191-json.log",
	        "Name": "/old-k8s-version-619885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-619885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-619885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	                "LowerDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-619885",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-619885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-619885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "421616cfd2a7f439a0b3c23cf49b5949a83e01d5161e2b18899e7405d9ee3688",
	            "SandboxKey": "/var/run/docker/netns/421616cfd2a7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33181"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33182"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33185"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33183"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33184"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-619885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:b7:ff:f3:5f:59",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f172a0295669142d53ec5906c89946014e1c53fe54e9e8bba2fffa329bff8586",
	                    "EndpointID": "aa38ffe1873608b0d022724aa1d36bda66fee35818e958fa956450070cd22e3f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-619885",
	                        "1ed6b6e47d49"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
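A single value can be pulled out of the inspect JSON above by hand, which is handy when chasing port-mapping problems (a sketch, assuming jq is installed on the host):

	# Host port published for the node container's SSH endpoint:
	docker inspect old-k8s-version-619885 \
	  | jq -r '.[0].NetworkSettings.Ports["22/tcp"][0].HostPort'
	# -> 33181 in this run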
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25: (1.032944962s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-345705 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo crio config                                                                                                                                                                                                             │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p cilium-345705                                                                                                                                                                                                                              │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p running-upgrade-896586                                                                                                                                                                                                                     │ running-upgrade-896586    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p pause-238319                                                                                                                                                                                                                               │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ force-systemd-flag-565668 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p force-systemd-flag-565668                                                                                                                                                                                                                  │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:42:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
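
	Editor's note: the header above is the standard glog/klog prefix, and every `I1018 ...` line below follows it. For readers who want to slice these logs by process ID or source file, here is a minimal Go sketch (the sample line is copied from the log below; the regexp is an illustration, not minikube code):

	package main

	import (
		"fmt"
		"regexp"
	)

	// Parses the glog/klog prefix documented above:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		sample := "I1018 09:42:24.595022  356384 out.go:360] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(sample)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}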
	I1018 09:42:24.595022  356384 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:42:24.595321  356384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:42:24.595335  356384 out.go:374] Setting ErrFile to fd 2...
	I1018 09:42:24.595342  356384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:42:24.595686  356384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:42:24.596226  356384 out.go:368] Setting JSON to false
	I1018 09:42:24.597306  356384 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5089,"bootTime":1760775456,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:42:24.597409  356384 start.go:141] virtualization: kvm guest
	I1018 09:42:24.599457  356384 out.go:179] * [no-preload-589869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:42:24.600502  356384 notify.go:220] Checking for updates...
	I1018 09:42:24.600680  356384 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:42:24.602226  356384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:42:24.603392  356384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:24.607059  356384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:42:24.608262  356384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:42:24.609402  356384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:42:24.610779  356384 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:24.610912  356384 config.go:182] Loaded profile config "kubernetes-upgrade-919613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:24.611008  356384 config.go:182] Loaded profile config "old-k8s-version-619885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:42:24.611092  356384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:42:24.638131  356384 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:42:24.638294  356384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:42:24.702675  356384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:42:24.691965323 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:42:24.702843  356384 docker.go:318] overlay module found
	I1018 09:42:24.704777  356384 out.go:179] * Using the docker driver based on user configuration
	I1018 09:42:24.705981  356384 start.go:305] selected driver: docker
	I1018 09:42:24.705998  356384 start.go:925] validating driver "docker" against <nil>
	I1018 09:42:24.706011  356384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:42:24.706561  356384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:42:24.768569  356384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:42:24.757198453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:42:24.768763  356384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:42:24.769068  356384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:42:24.771255  356384 out.go:179] * Using Docker driver with root privileges
	I1018 09:42:24.773054  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:24.773130  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:24.773142  356384 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:42:24.773215  356384 start.go:349] cluster config:
	{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:24.774434  356384 out.go:179] * Starting "no-preload-589869" primary control-plane node in "no-preload-589869" cluster
	I1018 09:42:24.775575  356384 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:42:24.776636  356384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:42:24.777631  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:42:24.777674  356384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:42:24.777787  356384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:42:24.777863  356384 cache.go:107] acquiring lock: {Name:mk8d380524b774b5edadec7411def9ea12a01591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777866  356384 cache.go:107] acquiring lock: {Name:mka90deb6de3b7e19386c6d0f0785fc3e96d2e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777956  356384 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:42:24.777968  356384 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.174µs
	I1018 09:42:24.777984  356384 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:42:24.777950  356384 cache.go:107] acquiring lock: {Name:mk9ad0aa9744bfc6007683a43233309af99e2ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778000  356384 cache.go:107] acquiring lock: {Name:mk2f4cf60554cd9991205940f1aa9911f9bb383a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777992  356384 cache.go:107] acquiring lock: {Name:mk3d292d197011122be585423e2f701ad4e9ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778027  356384 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:24.778028  356384 cache.go:107] acquiring lock: {Name:mka2dd49281e4623d770ed33d958b114b7cc789f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778122  356384 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:24.778150  356384 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:42:24.778148  356384 cache.go:107] acquiring lock: {Name:mk61b8919142cd1b35d71e72ba258fc114b79f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778199  356384 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:24.778245  356384 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:24.778333  356384 cache.go:107] acquiring lock: {Name:mka49eac321c9a155354693a3a6be91b02cdc4a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778365  356384 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:24.778408  356384 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:24.777859  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json: {Name:mk65166fc402595ea5b7b4ecb3249b12bd86a17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.779855  356384 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:24.779936  356384 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:42:24.779861  356384 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:24.779856  356384 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:24.779895  356384 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:24.779996  356384 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:24.780150  356384 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:24.805746  356384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:42:24.805771  356384 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:42:24.805792  356384 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:42:24.805832  356384 start.go:360] acquireMachinesLock for no-preload-589869: {Name:mk63da8322dd3ab3d8f833b8b716fde137314571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.805944  356384 start.go:364] duration metric: took 89.937µs to acquireMachinesLock for "no-preload-589869"
	I1018 09:42:24.805973  356384 start.go:93] Provisioning new machine with config: &{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:42:24.806072  356384 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:42:23.593525  352186 cli_runner.go:164] Run: docker network inspect old-k8s-version-619885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:42:23.610580  352186 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:42:23.614757  352186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
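
	Editor's note: the bash one-liner above is minikube's idempotent /etc/hosts update: `grep -v` strips any existing `host.minikube.internal` entry, the fresh mapping is appended, the result goes to a PID-keyed temp file (`/tmp/h.$$`), and a single `sudo cp` replaces /etc/hosts so the file is never left half-written. A hedged Go re-implementation of the filter-and-append step (the upsertHosts name is illustrative; minikube itself shells out to the quoted pipeline):

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHosts drops any line that already ends in "\t<name>",
	// then appends the new "ip\tname" mapping, mirroring the
	// grep -v / echo / cp pipeline in the log line above.
	func upsertHosts(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.76.1\thost.minikube.internal\n"
		fmt.Print(upsertHosts(before, "192.168.76.1", "host.minikube.internal"))
	}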
	I1018 09:42:23.682896  352186 kubeadm.go:883] updating cluster {Name:old-k8s-version-619885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:42:23.683025  352186 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:42:23.683108  352186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:23.824896  352186 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:42:23.824922  352186 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:42:23.824990  352186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:23.853315  352186 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:42:23.853335  352186 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:42:23.853344  352186 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 09:42:23.853454  352186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-619885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
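
	Editor's note: the empty `ExecStart=` in the unit snippet above is deliberate. In a systemd drop-in, an empty assignment clears the ExecStart inherited from the base kubelet.service, and the next line installs the replacement; without the reset, systemd would reject a second ExecStart for a non-oneshot service. A sketch of rendering such a drop-in with Go's text/template (the template text and field names are illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Drop-in in the style minikube writes to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
	// The bare "ExecStart=" line resets the base unit's command.
	const dropIn = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}

	[Install]
	`

	func main() {
		t := template.Must(template.New("kubelet").Parse(dropIn))
		_ = t.Execute(os.Stdout, map[string]string{
			"Runtime": "crio",
			"Version": "v1.28.0",
			"Node":    "old-k8s-version-619885",
			"IP":      "192.168.76.2",
		})
	}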
	I1018 09:42:23.853537  352186 ssh_runner.go:195] Run: crio config
	I1018 09:42:23.909299  352186 cni.go:84] Creating CNI manager for ""
	I1018 09:42:23.909324  352186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:23.909345  352186 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:42:23.909420  352186 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-619885 NodeName:old-k8s-version-619885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:42:23.909575  352186 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-619885"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
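
	Editor's note: the kubeadm.yaml above is a four-document manifest (InitConfiguration and ClusterConfiguration for kubeadm, then KubeletConfiguration and KubeProxyConfiguration). Note how the kubelet section neutralizes disk-pressure handling for CI: evictionHard thresholds at "0%" and imageGCHighThresholdPercent at 100 mean a nearly full disk never evicts pods or triggers image GC. A minimal Go sketch splitting such a manifest and listing each document's kind (plain string handling under the assumption of this simple layout, no YAML library):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		manifest := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\n" +
			"apiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n---\n" +
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\n" +
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
		// Each "---" separates one config document from the next.
		for i, doc := range strings.Split(manifest, "---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(line, "kind: ") {
					fmt.Printf("document %d: %s\n", i+1, strings.TrimPrefix(line, "kind: "))
				}
			}
		}
	}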
	
	I1018 09:42:23.909641  352186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 09:42:23.920088  352186 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:42:23.920152  352186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:42:23.929625  352186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 09:42:23.951016  352186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:42:23.970921  352186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1018 09:42:23.983549  352186 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:42:23.987586  352186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:42:23.997774  352186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:24.115707  352186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:24.137573  352186 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885 for IP: 192.168.76.2
	I1018 09:42:24.137598  352186 certs.go:195] generating shared ca certs ...
	I1018 09:42:24.137633  352186 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.137797  352186 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:24.137868  352186 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:24.137883  352186 certs.go:257] generating profile certs ...
	I1018 09:42:24.137952  352186 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key
	I1018 09:42:24.137977  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt with IP's: []
	I1018 09:42:24.654726  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt ...
	I1018 09:42:24.654763  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: {Name:mkbedca19eb398c9621a3ec385979fbd97e31283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.655003  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key ...
	I1018 09:42:24.655030  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key: {Name:mkb17d76dd188c4bceebac6fb7f8c290bd94c55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.655188  352186 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00
	I1018 09:42:24.655219  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:42:25.167779  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 ...
	I1018 09:42:25.167812  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00: {Name:mk612dc3760272fed390af6cd5dfff2a120b4b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.168020  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00 ...
	I1018 09:42:25.168044  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00: {Name:mk266f10cb7773d7ca7e765ec90aef469fd27911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.168173  352186 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt
	I1018 09:42:25.168260  352186 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key
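
	Editor's note: the SAN list used for the apiserver certificate above ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]) pairs the node IP and loopback with 10.96.0.1, the first usable address of ServiceCIDR 10.96.0.0/12, which is where the in-cluster `kubernetes` service resolves. A sketch of deriving that address (IPv4-only; the firstServiceIP name is illustrative):

	package main

	import (
		"fmt"
		"net"
	)

	// firstServiceIP returns the network address of the service CIDR
	// plus one, i.e. the IP the apiserver is reachable at in-cluster.
	func firstServiceIP(cidr string) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("not an IPv4 CIDR: %s", cidr)
		}
		next := make(net.IP, len(ip))
		copy(next, ip)
		next[len(next)-1]++ // 10.96.0.0 -> 10.96.0.1
		return next, nil
	}

	func main() {
		ip, err := firstServiceIP("10.96.0.0/12")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // 10.96.0.1
	}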
	I1018 09:42:25.168332  352186 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key
	I1018 09:42:25.168348  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt with IP's: []
	I1018 09:42:25.921619  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt ...
	I1018 09:42:25.921660  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt: {Name:mkeeb24b84c62fb5014c9d501ad16ca2bd32e80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.921870  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key ...
	I1018 09:42:25.921893  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key: {Name:mk8b9c293f16d81aac5acbfffecc4f1758fa20f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.922139  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:25.922190  352186 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:25.922203  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:25.922240  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:25.922270  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:25.922298  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:25.922357  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:25.923277  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:25.949610  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:25.969756  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:25.987945  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:26.011975  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:42:26.036783  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:42:26.059055  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:26.080933  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:26.107087  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:26.134959  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:26.155611  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:26.174912  352186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:26.191449  352186 ssh_runner.go:195] Run: openssl version
	I1018 09:42:26.198158  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:26.207664  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.212197  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.212253  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.255247  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:26.265431  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:26.274898  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.279026  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.279080  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.329439  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:26.341244  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:26.352695  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.358603  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.358668  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.412688  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:42:26.426086  352186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:26.433668  352186 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:42:26.433740  352186 kubeadm.go:400] StartCluster: {Name:old-k8s-version-619885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:26.433939  352186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:26.434040  352186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:26.476108  352186 cri.go:89] found id: ""
	I1018 09:42:26.476178  352186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:26.487710  352186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:26.501523  352186 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:42:26.501750  352186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:26.512206  352186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:42:26.512240  352186 kubeadm.go:157] found existing configuration files:
	
	I1018 09:42:26.512294  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:26.522227  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:42:26.522299  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:42:26.535560  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:26.546231  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:42:26.546303  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:42:26.556582  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:26.566420  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:42:26.566568  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:26.576682  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:26.588704  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:42:26.588784  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
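
	Editor's note: the sequence above is minikube's stale-config sweep before `kubeadm init`: each conf file under /etc/kubernetes is grepped for the expected control-plane endpoint, and a miss, including grep's exit status 2 when the file does not exist (as in every case here), triggers an unconditional `rm -f`, so a fresh init never inherits configs pointing at a different endpoint. A condensed Go sketch of that decision (isStale is an illustrative name; minikube shells out to grep and rm instead):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// isStale reports whether a kubeconfig-style file should be removed:
	// a missing file and a file lacking the expected endpoint are treated
	// the same way, matching the grep-then-rm sequence above.
	func isStale(path, endpoint string) bool {
		data, err := os.ReadFile(path)
		if err != nil {
			return true
		}
		return !strings.Contains(string(data), endpoint)
	}

	func main() {
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			fmt.Println(f, "stale:", isStale(f, "https://control-plane.minikube.internal:8443"))
		}
	}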
	I1018 09:42:26.606910  352186 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:42:26.664947  352186 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 09:42:26.665040  352186 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:42:26.706851  352186 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:42:26.706978  352186 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:42:26.707063  352186 kubeadm.go:318] OS: Linux
	I1018 09:42:26.707119  352186 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:42:26.707194  352186 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:42:26.707259  352186 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:42:26.707328  352186 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:42:26.707398  352186 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:42:26.707474  352186 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:42:26.707543  352186 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:42:26.707613  352186 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:42:26.790376  352186 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:42:26.790547  352186 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:42:26.790721  352186 kubeadm.go:318] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1018 09:42:26.952985  352186 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:42:26.957054  352186 out.go:252]   - Generating certificates and keys ...
	I1018 09:42:26.957187  352186 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:42:26.957283  352186 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:42:27.194045  352186 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:42:27.433910  352186 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:42:23.682810  353123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:23.827784  353123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:23.852490  353123 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613 for IP: 192.168.85.2
	I1018 09:42:23.852519  353123 certs.go:195] generating shared ca certs ...
	I1018 09:42:23.852542  353123 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:23.852714  353123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:23.852789  353123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:23.852806  353123 certs.go:257] generating profile certs ...
	I1018 09:42:23.852928  353123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.key
	I1018 09:42:23.852988  353123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.key.354dbbd0
	I1018 09:42:23.853041  353123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.key
	I1018 09:42:23.853191  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:23.853232  353123 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:23.853244  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:23.853275  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:23.853308  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:23.853337  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:23.853385  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:23.854238  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:23.874296  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:23.895842  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:23.917288  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:23.940345  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 09:42:23.965071  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:42:23.983867  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:24.002341  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:24.020110  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:24.040569  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:24.061544  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:24.081141  353123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:24.094264  353123 ssh_runner.go:195] Run: openssl version
	I1018 09:42:24.100476  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:24.109172  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.112963  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.113025  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.151850  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:24.161775  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:24.170994  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.175873  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.175933  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.214129  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:42:24.222588  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:24.231387  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.235136  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.235189  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.271566  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:24.280426  353123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:24.284550  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:42:24.320880  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:42:24.362416  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:42:24.410172  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:42:24.450102  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:42:24.490540  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
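(Editor's note) Each of the six checks above relies on openssl's -checkend flag: it exits 0 only if the certificate is still valid the given number of seconds from now (86400, i.e. 24 hours). A minimal sketch of how that exit status would be consumed (illustrative only, not minikube's actual code):

    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400; then
        echo "cert valid for at least 24h; reuse it"
    else
        echo "cert expires within 24h; would need regeneration"
    fi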
	I1018 09:42:24.531245  353123 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-919613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-919613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:24.531326  353123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:24.531370  353123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:24.564249  353123 cri.go:89] found id: ""
	I1018 09:42:24.564313  353123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:24.573002  353123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:42:24.573021  353123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:42:24.573069  353123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:42:24.581734  353123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.582469  353123 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-919613" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:24.582868  353123 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-919613" cluster setting kubeconfig missing "kubernetes-upgrade-919613" context setting]
	I1018 09:42:24.583575  353123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.584355  353123 kapi.go:59] client config for kubernetes-upgrade-919613: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:42:24.584955  353123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:42:24.584980  353123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:42:24.584988  353123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:42:24.584995  353123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:42:24.585000  353123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:42:24.585476  353123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:42:24.594382  353123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 09:41:59.549628879 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 09:42:23.583569144 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-919613"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.34.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
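(Editor's note) The drift shown in the diff is the kubeadm v1beta3 -> v1beta4 schema change: every extraArgs map of key: value pairs becomes a list of name/value objects, and the deprecated etcd proxy-refresh-interval extraArg is dropped along with the version bump to v1.34.1. minikube regenerates kubeadm.yaml.new itself; for reference, the upstream helper for the same conversion would be invoked roughly as (usage sketch, not taken from this run):

    kubeadm config migrate \
        --old-config /var/tmp/minikube/kubeadm.yaml \
        --new-config /var/tmp/minikube/kubeadm.yaml.new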
	I1018 09:42:24.594400  353123 kubeadm.go:1160] stopping kube-system containers ...
	I1018 09:42:24.594411  353123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 09:42:24.594459  353123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:24.626345  353123 cri.go:89] found id: ""
	I1018 09:42:24.626418  353123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:42:24.663957  353123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:24.674070  353123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 18 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 18 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 18 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 18 09:42 /etc/kubernetes/scheduler.conf
	
	I1018 09:42:24.674188  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:24.684460  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:24.693207  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:24.702074  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.702139  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:24.710072  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:24.718230  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.718285  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:42:24.728366  353123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:24.740347  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:24.794858  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:26.934659  353123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.139755661s)
	I1018 09:42:26.934735  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:27.109382  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:27.163381  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
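(Editor's note) Because existing configuration files were found, the restart path drives kubeadm phase by phase instead of running a full init. The five Run lines above correspond roughly to this shell loop (illustrative only; phase order taken from the log):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        # $phase is deliberately unquoted so "certs all" splits into two arguments
        sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done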
	I1018 09:42:27.222857  353123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:42:27.222927  353123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:42:27.723958  353123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:42:27.739035  353123 api_server.go:72] duration metric: took 516.184716ms to wait for apiserver process to appear ...
	I1018 09:42:27.739066  353123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:42:27.739088  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:27.739456  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:28.239989  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
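(Editor's note) The healthz wait above polls the apiserver endpoint until it answers; "connection refused" is expected while the static pods are still coming up. A rough curl equivalent of the loop (illustrative only):

    # -k: cluster CA is self-signed, -s: quiet, -f: treat HTTP errors as failure
    until curl -ksf https://192.168.85.2:8443/healthz >/dev/null; do
        sleep 0.5
    done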
	I1018 09:42:27.608860  352186 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:42:27.751380  352186 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:42:27.972019  352186 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:42:27.972221  352186 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-619885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:42:28.173330  352186 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:42:28.173543  352186 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-619885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:42:28.308805  352186 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:42:28.438420  352186 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:42:28.907675  352186 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:42:28.907758  352186 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:42:28.974488  352186 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:42:29.128735  352186 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:42:29.281752  352186 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:42:29.460756  352186 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:42:29.461515  352186 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:42:29.465451  352186 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:42:24.812105  356384 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:42:24.812328  356384 start.go:159] libmachine.API.Create for "no-preload-589869" (driver="docker")
	I1018 09:42:24.812358  356384 client.go:168] LocalClient.Create starting
	I1018 09:42:24.812443  356384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:42:24.812482  356384 main.go:141] libmachine: Decoding PEM data...
	I1018 09:42:24.812502  356384 main.go:141] libmachine: Parsing certificate...
	I1018 09:42:24.812564  356384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:42:24.812595  356384 main.go:141] libmachine: Decoding PEM data...
	I1018 09:42:24.812607  356384 main.go:141] libmachine: Parsing certificate...
	I1018 09:42:24.813055  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:42:24.836406  356384 cli_runner.go:211] docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:42:24.836479  356384 network_create.go:284] running [docker network inspect no-preload-589869] to gather additional debugging logs...
	I1018 09:42:24.836495  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869
	W1018 09:42:24.857225  356384 cli_runner.go:211] docker network inspect no-preload-589869 returned with exit code 1
	I1018 09:42:24.857252  356384 network_create.go:287] error running [docker network inspect no-preload-589869]: docker network inspect no-preload-589869: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-589869 not found
	I1018 09:42:24.857263  356384 network_create.go:289] output of [docker network inspect no-preload-589869]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-589869 not found
	
	** /stderr **
	I1018 09:42:24.857351  356384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:42:24.876525  356384 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:42:24.877044  356384 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:42:24.877417  356384 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:42:24.878137  356384 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f172a0295669 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:54:85:1e:fa:a0} reservation:<nil>}
	I1018 09:42:24.878599  356384 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:42:24.879221  356384 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e8c5a0}
	I1018 09:42:24.879243  356384 network_create.go:124] attempt to create docker network no-preload-589869 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 09:42:24.879295  356384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-589869 no-preload-589869
	I1018 09:42:24.950286  356384 network_create.go:108] docker network no-preload-589869 192.168.94.0/24 created
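(Editor's note) The subnet scan above starts at 192.168.49.0/24 and advances the third octet in steps of 9 (58, 67, 76, 85, ...) until it finds a /24 that no existing bridge uses, here 192.168.94.0/24. A rough shell analogue of that probe (logic inferred from the log lines, not minikube's actual code):

    for third in 49 58 67 76 85 94; do
        subnet="192.168.${third}.0/24"
        # skip any candidate whose range already appears on a host interface
        if ! ip -4 addr show | grep -q "192.168.${third}\."; then
            echo "free: ${subnet}"
            break
        fi
    done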
	I1018 09:42:24.950315  356384 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-589869" container
	I1018 09:42:24.950366  356384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:42:24.968918  356384 cli_runner.go:164] Run: docker volume create no-preload-589869 --label name.minikube.sigs.k8s.io=no-preload-589869 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:42:24.971754  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:42:24.977857  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:42:24.983434  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:42:24.988577  356384 oci.go:103] Successfully created a docker volume no-preload-589869
	I1018 09:42:24.988645  356384 cli_runner.go:164] Run: docker run --rm --name no-preload-589869-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589869 --entrypoint /usr/bin/test -v no-preload-589869:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:42:25.009313  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:42:25.013125  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:42:25.017201  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:42:25.068339  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:42:25.093224  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:42:25.093248  356384 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 315.348718ms
	I1018 09:42:25.093259  356384 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:42:25.436785  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:42:25.436810  356384 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 658.480781ms
	I1018 09:42:25.436836  356384 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:42:25.440048  356384 oci.go:107] Successfully prepared a docker volume no-preload-589869
	I1018 09:42:25.440085  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 09:42:25.440168  356384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:42:25.440216  356384 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:42:25.440265  356384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:42:25.501191  356384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-589869 --name no-preload-589869 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589869 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-589869 --network no-preload-589869 --ip 192.168.94.2 --volume no-preload-589869:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:42:25.785549  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Running}}
	I1018 09:42:25.806603  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:25.826306  356384 cli_runner.go:164] Run: docker exec no-preload-589869 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:42:25.874703  356384 oci.go:144] the created container "no-preload-589869" has a running status.
	I1018 09:42:25.874738  356384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa...
	I1018 09:42:25.935372  356384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:42:25.963682  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:25.982275  356384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:42:25.982299  356384 kic_runner.go:114] Args: [docker exec --privileged no-preload-589869 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:42:26.035990  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:26.057306  356384 machine.go:93] provisionDockerMachine start ...
	I1018 09:42:26.057453  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:26.079047  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:26.079424  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:26.079448  356384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:42:26.080190  356384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45294->127.0.0.1:33186: read: connection reset by peer
	I1018 09:42:26.454630  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:42:26.454659  356384 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.676514025s
	I1018 09:42:26.454674  356384 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:42:26.488188  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:42:26.488220  356384 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.710237667s
	I1018 09:42:26.488239  356384 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:42:26.602646  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:42:26.602680  356384 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.824823422s
	I1018 09:42:26.602698  356384 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:42:26.674620  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:42:26.674652  356384 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.896652242s
	I1018 09:42:26.674668  356384 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:42:26.964271  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:42:26.964303  356384 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.186368093s
	I1018 09:42:26.964318  356384 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:42:26.964345  356384 cache.go:87] Successfully saved all images to host disk.
	I1018 09:42:29.214021  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:42:29.214051  356384 ubuntu.go:182] provisioning hostname "no-preload-589869"
	I1018 09:42:29.214113  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.232562  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.232783  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.232797  356384 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-589869 && echo "no-preload-589869" | sudo tee /etc/hostname
	I1018 09:42:29.375720  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:42:29.375810  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.395319  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.395594  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.395624  356384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-589869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-589869/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-589869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:42:29.532349  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:42:29.532381  356384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:42:29.532403  356384 ubuntu.go:190] setting up certificates
	I1018 09:42:29.532414  356384 provision.go:84] configureAuth start
	I1018 09:42:29.532470  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:29.550411  356384 provision.go:143] copyHostCerts
	I1018 09:42:29.550472  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:42:29.550483  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:42:29.550555  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:42:29.550688  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:42:29.550701  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:42:29.550744  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:42:29.550851  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:42:29.550873  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:42:29.550912  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:42:29.551008  356384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.no-preload-589869 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-589869]
	I1018 09:42:29.707126  356384 provision.go:177] copyRemoteCerts
	I1018 09:42:29.707186  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:42:29.707230  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.726095  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:29.823612  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:42:29.843271  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:42:29.861915  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:42:29.880312  356384 provision.go:87] duration metric: took 347.878604ms to configureAuth
	I1018 09:42:29.880343  356384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:42:29.880536  356384 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:29.880662  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.899262  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.899477  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.899494  356384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:42:30.148636  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:42:30.148660  356384 machine.go:96] duration metric: took 4.091329665s to provisionDockerMachine
	I1018 09:42:30.148674  356384 client.go:171] duration metric: took 5.336305888s to LocalClient.Create
	I1018 09:42:30.148700  356384 start.go:167] duration metric: took 5.336372221s to libmachine.API.Create "no-preload-589869"
	I1018 09:42:30.148710  356384 start.go:293] postStartSetup for "no-preload-589869" (driver="docker")
	I1018 09:42:30.148733  356384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:42:30.148800  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:42:30.148876  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.167062  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.266004  356384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:42:30.269578  356384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:42:30.269613  356384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:42:30.269625  356384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:42:30.269681  356384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:42:30.269867  356384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:42:30.270008  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:42:30.278467  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:30.298168  356384 start.go:296] duration metric: took 149.439969ms for postStartSetup
	I1018 09:42:30.298558  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:30.316704  356384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:42:30.317019  356384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:42:30.317075  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.335598  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.429171  356384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:42:30.433788  356384 start.go:128] duration metric: took 5.627697899s to createHost
	I1018 09:42:30.433818  356384 start.go:83] releasing machines lock for "no-preload-589869", held for 5.627859528s
	I1018 09:42:30.433914  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:30.452187  356384 ssh_runner.go:195] Run: cat /version.json
	I1018 09:42:30.452243  356384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:42:30.452259  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.452323  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.470920  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.471681  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.626905  356384 ssh_runner.go:195] Run: systemctl --version
	I1018 09:42:30.633801  356384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:42:30.671875  356384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:42:30.678001  356384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:42:30.678087  356384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:42:30.713128  356384 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:42:30.713152  356384 start.go:495] detecting cgroup driver to use...
	I1018 09:42:30.713187  356384 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:42:30.713237  356384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:42:30.735929  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:42:30.751039  356384 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:42:30.751102  356384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:42:30.769425  356384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:42:30.788446  356384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:42:30.884031  356384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:42:30.988551  356384 docker.go:234] disabling docker service ...
	I1018 09:42:30.988630  356384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:42:31.009026  356384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:42:31.021429  356384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:42:31.116203  356384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:42:31.230784  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:42:31.244090  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:42:31.258165  356384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:42:31.258227  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.268244  356384 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:42:31.268301  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.277016  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.285477  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.294193  356384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:42:31.302330  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.310881  356384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.323723  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.332397  356384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:42:31.339816  356384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:42:31.347092  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:31.431256  356384 ssh_runner.go:195] Run: sudo systemctl restart crio
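(Editor's note) The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "systemd", conmon_cgroup is reset to "pod", and "net.ipv4.ip_unprivileged_port_start=0" is injected into default_sysctls. A quick way to verify the end state (illustrative only):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected: pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
    #           conmon_cgroup = "pod", "net.ipv4.ip_unprivileged_port_start=0"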
	I1018 09:42:31.550445  356384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:42:31.550516  356384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:42:31.554909  356384 start.go:563] Will wait 60s for crictl version
	I1018 09:42:31.554981  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.558477  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:42:31.583597  356384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:42:31.583672  356384 ssh_runner.go:195] Run: crio --version
	I1018 09:42:31.613008  356384 ssh_runner.go:195] Run: crio --version
	I1018 09:42:31.648752  356384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:42:29.468712  352186 out.go:252]   - Booting up control plane ...
	I1018 09:42:29.468884  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:42:29.469005  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:42:29.469098  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:42:29.482809  352186 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:42:29.484245  352186 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:42:29.484328  352186 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:42:29.579665  352186 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:42:33.243899  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:33.243975  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:31.650056  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:42:31.668381  356384 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:42:31.672817  356384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:42:31.683221  356384 kubeadm.go:883] updating cluster {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:42:31.683341  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:42:31.683385  356384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:31.710541  356384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 09:42:31.710570  356384 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 09:42:31.710665  356384 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.710678  356384 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.710688  356384 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:31.710701  356384 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.710721  356384 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.710732  356384 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.710791  356384 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.710726  356384 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.712031  356384 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.712045  356384 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:31.712049  356384 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.712141  356384 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.712143  356384 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.712173  356384 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.712195  356384 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.712208  356384 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.846342  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.857672  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.859209  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.871427  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.887066  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.891007  356384 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1018 09:42:31.891060  356384 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.891108  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.891723  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.905118  356384 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1018 09:42:31.905188  356384 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.905245  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.911216  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1018 09:42:31.954062  356384 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1018 09:42:31.954110  356384 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.954162  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954203  356384 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1018 09:42:31.954161  356384 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1018 09:42:31.954236  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.954247  356384 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.954255  356384 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.954283  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954289  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954317  356384 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1018 09:42:31.954338  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.954353  356384 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.954374  356384 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1018 09:42:31.954387  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954410  356384 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.954441  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.959772  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.960495  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.991385  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.991452  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.991457  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.991507  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:31.991556  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.991729  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.992404  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:32.033921  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:32.035600  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:32.035635  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:32.035705  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:32.035731  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:32.035792  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:32.036574  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:32.074955  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
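
Note: each "needs transfer" decision above comes from comparing the image ID that `sudo podman image inspect --format {{.Id}}` reports against the hash recorded for the cached image; on a miss or mismatch, the stale copy is removed with crictl before the cached tarball is shipped over. A rough sketch of that comparison (runtimeImageID is a hypothetical helper; the expected hash is taken from the pause:3.10.1 line above):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // runtimeImageID returns the image ID known to the node's runtime,
    // or "" when the image is absent.
    func runtimeImageID(ref string) string {
    	out, err := exec.Command("sudo", "podman", "image", "inspect",
    		"--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return ""
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	ref := "registry.k8s.io/pause:3.10.1"
    	want := "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
    	if got := runtimeImageID(ref); got != want {
    		fmt.Printf("%q needs transfer (runtime has %q)\n", ref, got)
    		// Drop the stale copy so the cached tarball can be loaded cleanly.
    		_ = exec.Command("sudo", "crictl", "rmi", ref).Run()
    	}
    }
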
	I1018 09:42:32.080819  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:42:32.080955  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:32.081033  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:32.085481  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:42:32.085560  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:32.085565  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:32.085682  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:42:32.086030  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:32.091246  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:42:32.091350  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:32.111713  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:42:32.111815  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:32.111989  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 09:42:32.112015  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1018 09:42:32.114381  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 09:42:32.114411  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1018 09:42:32.116597  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:42:32.116673  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.129743  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 09:42:32.129774  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1018 09:42:32.129802  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 09:42:32.129841  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 09:42:32.129865  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1018 09:42:32.129840  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1018 09:42:32.129897  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:42:32.130018  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:32.185416  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 09:42:32.185453  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1018 09:42:32.242458  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 09:42:32.242495  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
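
Note: every transfer above is guarded by a cheap existence probe: `stat -c "%s %y"` on the destination path, where exit status 1 means the file is absent and must be copied. A minimal sketch of that guard; scpToNode is a hypothetical stand-in for minikube's ssh_runner file transfer:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // scpToNode stands in for the ssh_runner.go transfer; the real code
    // streams the file over the existing SSH session.
    func scpToNode(src, dst string) {}

    func main() {
    	dst := "/var/lib/minikube/images/pause_3.10.1"
    	// stat prints "<size> <mtime>" when the file exists; a non-zero
    	// exit means the cached tarball has to be shipped across.
    	if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err != nil {
    		fmt.Println("missing on node, transferring:", dst)
    		scpToNode("cache/images/amd64/registry.k8s.io/pause_3.10.1", dst)
    	}
    }
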
	I1018 09:42:32.327527  356384 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.327626  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.811159  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1018 09:42:32.811207  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:32.811262  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:33.114320  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:33.973525  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.162230124s)
	I1018 09:42:33.973560  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 09:42:33.973587  356384 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:33.973621  356384 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1018 09:42:33.973676  356384 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:33.973718  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:33.973638  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:34.582012  352186 kubeadm.go:318] [apiclient] All control plane components are healthy after 5.002586 seconds
	I1018 09:42:34.582208  352186 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:42:34.596416  352186 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:42:35.119690  352186 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:42:35.120028  352186 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-619885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:42:35.629531  352186 kubeadm.go:318] [bootstrap-token] Using token: 0j8grk.zmi3e1k9gtnd1hr8
	I1018 09:42:35.630798  352186 out.go:252]   - Configuring RBAC rules ...
	I1018 09:42:35.630944  352186 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:42:35.634722  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:42:35.641023  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:42:35.644910  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:42:35.647895  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:42:35.650610  352186 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:42:35.661437  352186 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:42:35.854286  352186 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:42:36.038662  352186 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:42:36.039719  352186 kubeadm.go:318] 
	I1018 09:42:36.039811  352186 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:42:36.039844  352186 kubeadm.go:318] 
	I1018 09:42:36.039969  352186 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:42:36.039991  352186 kubeadm.go:318] 
	I1018 09:42:36.040038  352186 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:42:36.040120  352186 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:42:36.040193  352186 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:42:36.040201  352186 kubeadm.go:318] 
	I1018 09:42:36.040242  352186 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:42:36.040249  352186 kubeadm.go:318] 
	I1018 09:42:36.040290  352186 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:42:36.040319  352186 kubeadm.go:318] 
	I1018 09:42:36.040405  352186 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:42:36.040511  352186 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:42:36.040607  352186 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:42:36.040622  352186 kubeadm.go:318] 
	I1018 09:42:36.040744  352186 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:42:36.040894  352186 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:42:36.040905  352186 kubeadm.go:318] 
	I1018 09:42:36.041033  352186 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0j8grk.zmi3e1k9gtnd1hr8 \
	I1018 09:42:36.041189  352186 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:42:36.041221  352186 kubeadm.go:318] 	--control-plane 
	I1018 09:42:36.041230  352186 kubeadm.go:318] 
	I1018 09:42:36.041355  352186 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:42:36.041362  352186 kubeadm.go:318] 
	I1018 09:42:36.041469  352186 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0j8grk.zmi3e1k9gtnd1hr8 \
	I1018 09:42:36.041623  352186 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:42:36.044234  352186 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:42:36.044411  352186 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
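
Note: the --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from ca.crt independently of kubeadm. A small sketch (the cert path is the one minikube provisions on the node):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("ca.crt is not PEM")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the Subject Public Key Info, not the whole cert.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
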
	I1018 09:42:36.044445  352186 cni.go:84] Creating CNI manager for ""
	I1018 09:42:36.044459  352186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:36.046763  352186 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:42:36.048153  352186 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:42:36.052735  352186 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 09:42:36.052756  352186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:42:36.067130  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:42:36.778567  352186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:42:36.778641  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:36.778774  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-619885 minikube.k8s.io/updated_at=2025_10_18T09_42_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=old-k8s-version-619885 minikube.k8s.io/primary=true
	I1018 09:42:36.850013  352186 ops.go:34] apiserver oom_adj: -16
	I1018 09:42:36.850273  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:37.351083  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.247728  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:38.247768  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:35.244924  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.271170545s)
	I1018 09:42:35.244957  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 09:42:35.244967  356384 ssh_runner.go:235] Completed: which crictl: (1.271232185s)
	I1018 09:42:35.244984  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:35.245022  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:35.245027  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:36.679295  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.43424504s)
	I1018 09:42:36.679325  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 09:42:36.679341  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.43427421s)
	I1018 09:42:36.679415  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:36.679352  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:36.679498  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:37.830377  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.15093625s)
	I1018 09:42:37.830452  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.150925916s)
	I1018 09:42:37.830465  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:37.830481  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 09:42:37.830517  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:37.830563  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:39.023076  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.192471894s)
	I1018 09:42:39.023119  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 09:42:39.023132  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.192640454s)
	I1018 09:42:39.023153  356384 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:39.023177  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 09:42:39.023216  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:39.023261  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:37.850995  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.350835  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.851071  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:39.350399  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:39.850710  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:40.351032  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:40.851075  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:41.350751  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:41.850466  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:42.350760  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.251927  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:43.252003  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:42.446063  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.422825096s)
	I1018 09:42:42.446088  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 09:42:42.446158  356384 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.422878017s)
	I1018 09:42:42.446190  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 09:42:42.446212  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1018 09:42:42.496029  356384 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:42.496082  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:43.048016  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 09:42:43.048058  356384 cache_images.go:124] Successfully loaded all cached images
	I1018 09:42:43.048063  356384 cache_images.go:93] duration metric: took 11.337478312s to LoadCachedImages
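
Note: while the transfers overlap, the loads themselves run one at a time: crio.go:275 picks a tarball, feeds it to `sudo podman load -i <file>`, and only a successful load counts as "Transferred and loaded"; the 11.3s total above is the sum of those sequential loads plus the copies. A sketch of the loop, with paths taken from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	tarballs := []string{
    		"/var/lib/minikube/images/pause_3.10.1",
    		"/var/lib/minikube/images/kube-scheduler_v1.34.1",
    		"/var/lib/minikube/images/etcd_3.6.4-0",
    	}
    	for _, t := range tarballs {
    		start := time.Now()
    		out, err := exec.Command("sudo", "podman", "load", "-i", t).CombinedOutput()
    		if err != nil {
    			fmt.Printf("load %s failed: %v\n%s", t, err, out)
    			continue
    		}
    		fmt.Printf("loaded %s in %s\n", t, time.Since(start))
    	}
    }
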
	I1018 09:42:43.048076  356384 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 09:42:43.048172  356384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-589869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:42:43.048244  356384 ssh_runner.go:195] Run: crio config
	I1018 09:42:43.096290  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:43.096312  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:43.096331  356384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:42:43.096353  356384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-589869 NodeName:no-preload-589869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:42:43.096476  356384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-589869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
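
Note: the kubelet unit file and the three kubeadm documents above are rendered by minikube from its cluster config. A minimal text/template sketch of how such a fragment can be produced; the struct and template here are illustrative, not minikube's actual ones:

    package main

    import (
    	"os"
    	"text/template"
    )

    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    // The InitConfiguration fragment from the log, with the node-specific
    // values templated out.
    const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
    	"kind: InitConfiguration\n" +
    	"localAPIEndpoint:\n" +
    	"  advertiseAddress: {{.AdvertiseAddress}}\n" +
    	"  bindPort: {{.BindPort}}\n" +
    	"nodeRegistration:\n" +
    	"  criSocket: {{.CRISocket}}\n" +
    	"  name: \"{{.NodeName}}\"\n"

    func main() {
    	cfg := initCfg{
    		AdvertiseAddress: "192.168.94.2",
    		BindPort:         8443,
    		NodeName:         "no-preload-589869",
    		CRISocket:        "unix:///var/run/crio/crio.sock",
    	}
    	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
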
	I1018 09:42:43.096544  356384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:42:43.105087  356384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 09:42:43.105193  356384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 09:42:43.113120  356384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1018 09:42:43.113137  356384 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1018 09:42:43.113193  356384 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1018 09:42:43.113215  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 09:42:43.117293  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 09:42:43.117329  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1018 09:42:44.014669  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:42:44.028101  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 09:42:44.032143  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 09:42:44.032177  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1018 09:42:44.207087  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 09:42:44.211372  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 09:42:44.211401  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
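
Note: the "Not caching binary" lines above use checksum=file: URLs, i.e. each downloaded binary is verified against the .sha256 file published next to it. A sketch of that download-and-verify step, buffering in memory for brevity (the real downloader streams to disk):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet"
    	bin, err := fetch(base)
    	if err != nil {
    		panic(err)
    	}
    	sumFile, err := fetch(base + ".sha256")
    	if err != nil {
    		panic(err)
    	}
    	want := strings.Fields(string(sumFile))[0] // .sha256 holds the hex digest
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != want {
    		panic("kubelet checksum mismatch")
    	}
    	fmt.Println("kubelet verified:", want)
    }
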
	I1018 09:42:44.384068  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:42:44.392671  356384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:42:44.407534  356384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:42:44.425058  356384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 09:42:44.438393  356384 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:42:44.442242  356384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:42:44.452223  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:44.531971  356384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:44.562070  356384 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869 for IP: 192.168.94.2
	I1018 09:42:44.562093  356384 certs.go:195] generating shared ca certs ...
	I1018 09:42:44.562115  356384 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.562270  356384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:44.562313  356384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:44.562324  356384 certs.go:257] generating profile certs ...
	I1018 09:42:44.562376  356384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key
	I1018 09:42:44.562389  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt with IP's: []
	I1018 09:42:42.850651  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.350691  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.850624  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:44.351073  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:44.851041  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:45.351034  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:45.851220  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:46.350726  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:46.850974  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:47.350558  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.255594  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:48.255649  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:48.627598  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:55126->192.168.85.2:8443: read: connection reset by peer
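
Note: the healthz probes above (api_server.go:253/269) follow a simple pattern: GET /healthz with a short per-request timeout, retried until an overall deadline; the "context deadline exceeded" and "connection reset by peer" lines are individual probes failing while the apiserver comes up. A compact sketch of that loop (the skip-verify transport is an assumption for the sketch, since the host does not trust the cluster's serving cert):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if resp != nil {
    			resp.Body.Close()
    		}
    		time.Sleep(time.Second) // retry until the overall deadline
    	}
    	fmt.Println("gave up waiting for /healthz")
    }
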
	I1018 09:42:47.850967  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.350614  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.850374  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:49.351009  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:49.425681  352186 kubeadm.go:1113] duration metric: took 12.647104717s to wait for elevateKubeSystemPrivileges
	I1018 09:42:49.425885  352186 kubeadm.go:402] duration metric: took 22.99214647s to StartCluster
	I1018 09:42:49.425916  352186 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:49.425979  352186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:49.427564  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:49.427925  352186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:42:49.428368  352186 config.go:182] Loaded profile config "old-k8s-version-619885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:42:49.428518  352186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:42:49.428636  352186 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-619885"
	I1018 09:42:49.428660  352186 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-619885"
	I1018 09:42:49.428768  352186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-619885"
	I1018 09:42:49.428666  352186 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-619885"
	I1018 09:42:49.428907  352186 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:42:49.429203  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.429436  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.428678  352186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:42:49.433273  352186 out.go:179] * Verifying Kubernetes components...
	I1018 09:42:49.434601  352186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:49.455161  352186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:44.736618  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt ...
	I1018 09:42:44.736644  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: {Name:mk681b5eaf9c5bbd8adeb1d784233d192b938336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.736837  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key ...
	I1018 09:42:44.736857  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key: {Name:mk1c12e71185ce597c6dee95da15e4470786d675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.736953  356384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d
	I1018 09:42:44.736970  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:42:45.083161  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d ...
	I1018 09:42:45.083188  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d: {Name:mk4a75e600fa90a034a8972d87463f87cb5b98a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.083343  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d ...
	I1018 09:42:45.083356  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d: {Name:mk0e1847f7003315b8d6824ad9a722525cb3c942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.083423  356384 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt
	I1018 09:42:45.083497  356384 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key
	I1018 09:42:45.083551  356384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key
	I1018 09:42:45.083577  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt with IP's: []
	I1018 09:42:45.157195  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt ...
	I1018 09:42:45.157221  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt: {Name:mk59913af5d0eab5bb4250a6620440f15595ef7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.157379  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key ...
	I1018 09:42:45.157393  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key: {Name:mk6421ddcf8217af18599b98b316a3f4bbbea80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
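
Note: certs.go above reuses a shared CA, then signs per-profile certificates whose IP SANs cover the service VIP, loopback, and the node IP (the [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2] list). A compact stdlib sketch of that CA-plus-leaf flow, with error handling elided for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA, the role minikubeCA plays above.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(3, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert carrying the apiserver's IP SANs.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
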
	I1018 09:42:45.157561  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:45.157603  356384 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:45.157613  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:45.157633  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:45.157660  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:45.157682  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:45.157723  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:45.158380  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:45.177690  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:45.195208  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:45.212733  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:45.230282  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:42:45.247450  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:42:45.264949  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:45.282007  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:45.299203  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:45.317947  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:45.335528  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:45.352682  356384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:45.365495  356384 ssh_runner.go:195] Run: openssl version
	I1018 09:42:45.372334  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:45.382688  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.386878  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.386953  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.431550  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:45.440658  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:45.449777  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.453849  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.453918  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.488493  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:45.497428  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:45.506224  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.510594  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.510650  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.546210  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
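
Note: the `ln -fs ... /etc/ssl/certs/3ec20f2e.0` style links above exist because OpenSSL locates trusted CAs through subject-hash symlinks named <hash>.0, and `openssl x509 -hash -noout` prints exactly that hash. A tiny sketch that recomputes the link name minikube creates:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	// Same link the log creates: <subject-hash>.0 -> the CA pem.
    	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
    }
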
	I1018 09:42:45.555223  356384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:45.559094  356384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:42:45.559147  356384 kubeadm.go:400] StartCluster: {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:45.559216  356384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:45.559256  356384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:45.587090  356384 cri.go:89] found id: ""
	I1018 09:42:45.587185  356384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:45.595435  356384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:45.603402  356384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:42:45.603463  356384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:45.611282  356384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:42:45.611307  356384 kubeadm.go:157] found existing configuration files:
	
	I1018 09:42:45.611361  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:45.618930  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:42:45.618987  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:42:45.626002  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:45.633781  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:42:45.633850  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:42:45.641390  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:45.649135  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:42:45.649183  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:45.656552  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:45.664632  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:42:45.664710  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:42:45.672639  356384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:42:45.725790  356384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:42:45.781811  356384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:42:49.455898  352186 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-619885"
	I1018 09:42:49.455992  352186 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:42:49.456406  352186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:42:49.456422  352186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:42:49.456433  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.456475  352186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-619885
	I1018 09:42:49.488279  352186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:42:49.488306  352186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:42:49.488398  352186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-619885
	I1018 09:42:49.488951  352186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/old-k8s-version-619885/id_rsa Username:docker}
	I1018 09:42:49.515370  352186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/old-k8s-version-619885/id_rsa Username:docker}
	I1018 09:42:49.533547  352186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:42:49.600084  352186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:49.616161  352186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:42:49.642271  352186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:42:49.826110  352186 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:42:49.827420  352186 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-619885" to be "Ready" ...
	I1018 09:42:50.040211  352186 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:42:50.041455  352186 addons.go:514] duration metric: took 612.932296ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:42:50.330597  352186 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-619885" context rescaled to 1 replicas
	W1018 09:42:51.832069  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:48.740146  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:48.740519  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:49.239989  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:49.240446  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:49.740062  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:56.306510  356384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:42:56.306592  356384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:42:56.306730  356384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:42:56.306819  356384 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:42:56.306884  356384 kubeadm.go:318] OS: Linux
	I1018 09:42:56.306927  356384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:42:56.306968  356384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:42:56.307009  356384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:42:56.307066  356384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:42:56.307146  356384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:42:56.307234  356384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:42:56.307293  356384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:42:56.307333  356384 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:42:56.307398  356384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:42:56.307518  356384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:42:56.307653  356384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:42:56.307739  356384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:42:56.309095  356384 out.go:252]   - Generating certificates and keys ...
	I1018 09:42:56.309163  356384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:42:56.309229  356384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:42:56.309287  356384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:42:56.309345  356384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:42:56.309396  356384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:42:56.309444  356384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:42:56.309494  356384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:42:56.309600  356384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-589869] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:42:56.309698  356384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:42:56.309884  356384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-589869] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:42:56.309950  356384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:42:56.310016  356384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:42:56.310055  356384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:42:56.310106  356384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:42:56.310184  356384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:42:56.310282  356384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:42:56.310367  356384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:42:56.310434  356384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:42:56.310513  356384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:42:56.310601  356384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:42:56.310660  356384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:42:56.312430  356384 out.go:252]   - Booting up control plane ...
	I1018 09:42:56.312510  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:42:56.312583  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:42:56.312663  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:42:56.312769  356384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:42:56.312872  356384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:42:56.312966  356384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:42:56.313042  356384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:42:56.313076  356384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:42:56.313191  356384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:42:56.313279  356384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:42:56.313333  356384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001828299s
	I1018 09:42:56.313410  356384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:42:56.313492  356384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 09:42:56.313579  356384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:42:56.313660  356384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:42:56.313719  356384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.340309574s
	I1018 09:42:56.313818  356384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.004800013s
	I1018 09:42:56.313930  356384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001688732s
	I1018 09:42:56.314067  356384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:42:56.314217  356384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:42:56.314304  356384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:42:56.314505  356384 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-589869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:42:56.314566  356384 kubeadm.go:318] [bootstrap-token] Using token: atql1s.56kw74yf44dlyzs8
	I1018 09:42:56.316346  356384 out.go:252]   - Configuring RBAC rules ...
	I1018 09:42:56.316461  356384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:42:56.316537  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:42:56.316705  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:42:56.316840  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:42:56.316975  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:42:56.317102  356384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:42:56.317215  356384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:42:56.317259  356384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:42:56.317299  356384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:42:56.317305  356384 kubeadm.go:318] 
	I1018 09:42:56.317354  356384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:42:56.317363  356384 kubeadm.go:318] 
	I1018 09:42:56.317442  356384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:42:56.317452  356384 kubeadm.go:318] 
	I1018 09:42:56.317480  356384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:42:56.317543  356384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:42:56.317600  356384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:42:56.317609  356384 kubeadm.go:318] 
	I1018 09:42:56.317654  356384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:42:56.317659  356384 kubeadm.go:318] 
	I1018 09:42:56.317698  356384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:42:56.317704  356384 kubeadm.go:318] 
	I1018 09:42:56.317746  356384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:42:56.317857  356384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:42:56.317918  356384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:42:56.317924  356384 kubeadm.go:318] 
	I1018 09:42:56.317997  356384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:42:56.318105  356384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:42:56.318119  356384 kubeadm.go:318] 
	I1018 09:42:56.318222  356384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token atql1s.56kw74yf44dlyzs8 \
	I1018 09:42:56.318338  356384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:42:56.318378  356384 kubeadm.go:318] 	--control-plane 
	I1018 09:42:56.318389  356384 kubeadm.go:318] 
	I1018 09:42:56.318465  356384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:42:56.318472  356384 kubeadm.go:318] 
	I1018 09:42:56.318559  356384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token atql1s.56kw74yf44dlyzs8 \
	I1018 09:42:56.318655  356384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
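The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A minimal sketch for recomputing it by hand, assuming the default RSA CA under the certificateDir /var/lib/minikube/certs reported in the [certs] phase:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'

The output should match the sha256:74ba1df2... value printed for both the control-plane and worker join commands.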
	I1018 09:42:56.318683  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:56.318692  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:56.319950  356384 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:42:54.330847  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	W1018 09:42:56.830703  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:54.741268  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:54.741306  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:56.321094  356384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:42:56.325446  356384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:42:56.325460  356384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:42:56.339214  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:42:56.546033  356384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:42:56.546104  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:56.546171  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-589869 minikube.k8s.io/updated_at=2025_10_18T09_42_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=no-preload-589869 minikube.k8s.io/primary=true
	I1018 09:42:56.625893  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:56.625893  356384 ops.go:34] apiserver oom_adj: -16
	I1018 09:42:57.126939  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:57.626558  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:58.126718  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:58.625925  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:59.126379  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:59.625969  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:00.126809  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:00.626644  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:01.126006  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:01.197635  356384 kubeadm.go:1113] duration metric: took 4.651575458s to wait for elevateKubeSystemPrivileges
	I1018 09:43:01.197671  356384 kubeadm.go:402] duration metric: took 15.638525769s to StartCluster
	I1018 09:43:01.197696  356384 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:01.197794  356384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:01.199265  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:01.199493  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:43:01.199500  356384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:43:01.199556  356384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:43:01.199670  356384 addons.go:69] Setting storage-provisioner=true in profile "no-preload-589869"
	I1018 09:43:01.199678  356384 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:01.199688  356384 addons.go:69] Setting default-storageclass=true in profile "no-preload-589869"
	I1018 09:43:01.199713  356384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-589869"
	I1018 09:43:01.199692  356384 addons.go:238] Setting addon storage-provisioner=true in "no-preload-589869"
	I1018 09:43:01.199752  356384 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:01.200158  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.200328  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.204345  356384 out.go:179] * Verifying Kubernetes components...
	I1018 09:43:01.209303  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:01.221767  356384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:43:01.222154  356384 addons.go:238] Setting addon default-storageclass=true in "no-preload-589869"
	I1018 09:43:01.222198  356384 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:01.222744  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.223004  356384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:01.223022  356384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:43:01.223106  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:01.244587  356384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:01.244624  356384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:43:01.244685  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:01.250000  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:01.273407  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:01.293153  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:43:01.354955  356384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:01.368965  356384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:01.392388  356384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:01.477719  356384 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1018 09:43:01.478900  356384 node_ready.go:35] waiting up to 6m0s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:01.668511  356384 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 09:42:58.831596  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	W1018 09:43:01.331652  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:59.742949  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:59.742995  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:01.669701  356384 addons.go:514] duration metric: took 470.141667ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:43:01.981903  356384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-589869" context rescaled to 1 replicas
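The sed pipeline run at 09:43:01.293153 is how that host record gets injected: it splices a hosts plugin block ahead of the forward directive in the coredns ConfigMap and replaces the ConfigMap in place. Reconstructed from those sed expressions, the resulting Corefile fragment looks like:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

It can be inspected on a live cluster with kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' (the ConfigMap's data key is Corefile).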
	W1018 09:43:03.482553  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:02.832950  352186 node_ready.go:49] node "old-k8s-version-619885" is "Ready"
	I1018 09:43:02.832991  352186 node_ready.go:38] duration metric: took 13.005539257s for node "old-k8s-version-619885" to be "Ready" ...
	I1018 09:43:02.833013  352186 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:02.833079  352186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:02.850541  352186 api_server.go:72] duration metric: took 13.420992388s to wait for apiserver process to appear ...
	I1018 09:43:02.850572  352186 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:02.850598  352186 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:43:02.857555  352186 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:43:02.859060  352186 api_server.go:141] control plane version: v1.28.0
	I1018 09:43:02.859092  352186 api_server.go:131] duration metric: took 8.512144ms to wait for apiserver health ...
	I1018 09:43:02.859104  352186 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:02.863457  352186 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:02.863494  352186 system_pods.go:61] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:02.863504  352186 system_pods.go:61] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:02.863515  352186 system_pods.go:61] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:02.863523  352186 system_pods.go:61] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:02.863530  352186 system_pods.go:61] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:02.863540  352186 system_pods.go:61] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:02.863547  352186 system_pods.go:61] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:02.863555  352186 system_pods.go:61] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending
	I1018 09:43:02.863564  352186 system_pods.go:74] duration metric: took 4.452537ms to wait for pod list to return data ...
	I1018 09:43:02.863578  352186 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:02.866277  352186 default_sa.go:45] found service account: "default"
	I1018 09:43:02.866301  352186 default_sa.go:55] duration metric: took 2.715282ms for default service account to be created ...
	I1018 09:43:02.866313  352186 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:02.870155  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:02.870191  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:02.870202  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:02.870209  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:02.870215  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:02.870221  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:02.870227  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:02.870240  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:02.870248  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:02.870274  352186 retry.go:31] will retry after 293.232434ms: missing components: kube-dns
	I1018 09:43:03.169427  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.169471  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.169482  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.169490  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.169496  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.169501  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.169506  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.169511  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.169520  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.169540  352186 retry.go:31] will retry after 294.260183ms: missing components: kube-dns
	I1018 09:43:03.468244  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.468273  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.468279  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.468286  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.468290  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.468293  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.468297  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.468300  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.468304  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.468318  352186 retry.go:31] will retry after 321.22082ms: missing components: kube-dns
	I1018 09:43:03.793422  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.793454  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.793460  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.793465  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.793469  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.793475  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.793480  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.793485  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.793491  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.793511  352186 retry.go:31] will retry after 513.544946ms: missing components: kube-dns
	I1018 09:43:04.311386  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:04.311413  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running
	I1018 09:43:04.311418  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:04.311422  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:04.311425  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:04.311429  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:04.311432  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:04.311435  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:04.311438  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:04.311446  352186 system_pods.go:126] duration metric: took 1.445126187s to wait for k8s-apps to be running ...
	I1018 09:43:04.311453  352186 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:04.311496  352186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:04.324451  352186 system_svc.go:56] duration metric: took 12.985333ms WaitForService to wait for kubelet
	I1018 09:43:04.324478  352186 kubeadm.go:586] duration metric: took 14.894943514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:04.324494  352186 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:04.327090  352186 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:04.327112  352186 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:04.327128  352186 node_conditions.go:105] duration metric: took 2.629403ms to run NodePressure ...
	I1018 09:43:04.327140  352186 start.go:241] waiting for startup goroutines ...
	I1018 09:43:04.327147  352186 start.go:246] waiting for cluster config update ...
	I1018 09:43:04.327156  352186 start.go:255] writing updated cluster config ...
	I1018 09:43:04.327401  352186 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:04.331219  352186 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:04.335281  352186 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.339341  352186 pod_ready.go:94] pod "coredns-5dd5756b68-wklp4" is "Ready"
	I1018 09:43:04.339360  352186 pod_ready.go:86] duration metric: took 4.058957ms for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.342007  352186 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.346571  352186 pod_ready.go:94] pod "etcd-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.346599  352186 pod_ready.go:86] duration metric: took 4.567876ms for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.349243  352186 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.353054  352186 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.353078  352186 pod_ready.go:86] duration metric: took 3.814596ms for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.355578  352186 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.736236  352186 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.736267  352186 pod_ready.go:86] duration metric: took 380.668197ms for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.936030  352186 pod_ready.go:83] waiting for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.334891  352186 pod_ready.go:94] pod "kube-proxy-spkr8" is "Ready"
	I1018 09:43:05.334917  352186 pod_ready.go:86] duration metric: took 398.862319ms for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.535379  352186 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.935256  352186 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-619885" is "Ready"
	I1018 09:43:05.935281  352186 pod_ready.go:86] duration metric: took 399.880096ms for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.935292  352186 pod_ready.go:40] duration metric: took 1.604042189s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:05.985690  352186 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:43:05.987568  352186 out.go:203] 
	W1018 09:43:05.988657  352186 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:43:05.989705  352186 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:43:05.991209  352186 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-619885" cluster and "default" namespace by default
	I1018 09:43:04.743175  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:43:04.743209  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1018 09:43:05.982397  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	W1018 09:43:08.482571  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:09.717903  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:40168->192.168.85.2:8443: read: connection reset by peer
	I1018 09:43:09.717956  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:09.718334  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:09.739601  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:09.739996  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:10.239573  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:10.240006  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:10.739645  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:10.740120  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:11.239870  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:11.240230  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:11.739885  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:11.740288  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:12.240017  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:12.240380  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:12.740068  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:12.740479  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:13.239969  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:13.240354  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
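These probes hit the same endpoint minikube's wait loop checks; "connection refused" just means nothing is listening on 8443 yet, while "connection reset" and client timeouts indicate an apiserver that is still starting or restarting. An equivalent manual probe (the -k flag skips verification of the cluster-CA-signed certificate):

	curl -sk https://192.168.85.2:8443/healthz

A healthy apiserver answers 200 with the body "ok", as the 192.168.76.2 and 192.168.94.2 clusters do above once their nodes come up.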
	W1018 09:43:10.982419  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	W1018 09:43:13.482267  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:14.482638  356384 node_ready.go:49] node "no-preload-589869" is "Ready"
	I1018 09:43:14.482668  356384 node_ready.go:38] duration metric: took 13.003733019s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:14.482686  356384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:14.482753  356384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:14.498725  356384 api_server.go:72] duration metric: took 13.299189053s to wait for apiserver process to appear ...
	I1018 09:43:14.498760  356384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:14.498798  356384 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:14.505089  356384 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:43:14.506197  356384 api_server.go:141] control plane version: v1.34.1
	I1018 09:43:14.506226  356384 api_server.go:131] duration metric: took 7.458167ms to wait for apiserver health ...
	I1018 09:43:14.506237  356384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:14.510161  356384 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:14.510200  356384 system_pods.go:61] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:14.510209  356384 system_pods.go:61] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running
	I1018 09:43:14.510219  356384 system_pods.go:61] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:14.510225  356384 system_pods.go:61] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running
	I1018 09:43:14.510231  356384 system_pods.go:61] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running
	I1018 09:43:14.510241  356384 system_pods.go:61] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:14.510251  356384 system_pods.go:61] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running
	I1018 09:43:14.510258  356384 system_pods.go:61] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:14.510270  356384 system_pods.go:74] duration metric: took 4.017075ms to wait for pod list to return data ...
	I1018 09:43:14.510284  356384 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:14.513176  356384 default_sa.go:45] found service account: "default"
	I1018 09:43:14.513218  356384 default_sa.go:55] duration metric: took 2.926748ms for default service account to be created ...
	I1018 09:43:14.513228  356384 system_pods.go:116] waiting for k8s-apps to be running ...
	
	
	==> CRI-O <==
	Oct 18 09:43:03 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:03.195918243Z" level=info msg="Starting container: 0485b039396bdce2e7c984621d403ce6c2d65a2846e89c60c86d0ceab1fae795" id=84bdfa76-0d3a-4e29-86bc-915433988e06 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:43:03 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:03.197789735Z" level=info msg="Started container" PID=2166 containerID=0485b039396bdce2e7c984621d403ce6c2d65a2846e89c60c86d0ceab1fae795 description=kube-system/coredns-5dd5756b68-wklp4/coredns id=84bdfa76-0d3a-4e29-86bc-915433988e06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=559337afffe7c393e41f97fbd653dcccc57907ea616706b8914cb9a00c3c6c41
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.42533793Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c3adb013-7ea2-4058-844c-03e339759caf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.425436741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.430141589Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7873dea1bf14af2699b7d79486ea40d466f9de311977cf0a666fbc032df302ff UID:2e50d21c-d2e2-4cc7-b111-04c19153fc41 NetNS:/var/run/netns/212d0280-df78-4fa6-a4b5-812a38d19885 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000411840}] Aliases:map[]}"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.430175277Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.43969912Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7873dea1bf14af2699b7d79486ea40d466f9de311977cf0a666fbc032df302ff UID:2e50d21c-d2e2-4cc7-b111-04c19153fc41 NetNS:/var/run/netns/212d0280-df78-4fa6-a4b5-812a38d19885 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000411840}] Aliases:map[]}"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.439954907Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.440896272Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.442117575Z" level=info msg="Ran pod sandbox 7873dea1bf14af2699b7d79486ea40d466f9de311977cf0a666fbc032df302ff with infra container: default/busybox/POD" id=c3adb013-7ea2-4058-844c-03e339759caf name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.443290034Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c097bf6b-c565-40ef-b418-bb7eb91437a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.443417684Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c097bf6b-c565-40ef-b418-bb7eb91437a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.443487851Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c097bf6b-c565-40ef-b418-bb7eb91437a6 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.444091968Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d5c1b4cc-6039-4734-a978-c56035e2974a name=/runtime.v1.ImageService/PullImage
	Oct 18 09:43:06 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:06.445684512Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.523292442Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d5c1b4cc-6039-4734-a978-c56035e2974a name=/runtime.v1.ImageService/PullImage
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.524155809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3814d953-841f-4b06-aff8-49da385717da name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.525704778Z" level=info msg="Creating container: default/busybox/busybox" id=b09c9734-6268-4dea-b43a-81d4aed3fdf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.526464513Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.530044487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.530527281Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.55250138Z" level=info msg="Created container 794890c627256f6aad6f2c156afccc84f09e7bba2d8be95d77e6e43d728c14ec: default/busybox/busybox" id=b09c9734-6268-4dea-b43a-81d4aed3fdf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.553097322Z" level=info msg="Starting container: 794890c627256f6aad6f2c156afccc84f09e7bba2d8be95d77e6e43d728c14ec" id=b60d7f7b-c182-4cb3-9aa7-b6c4fe4928c4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:43:08 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:08.554654997Z" level=info msg="Started container" PID=2241 containerID=794890c627256f6aad6f2c156afccc84f09e7bba2d8be95d77e6e43d728c14ec description=default/busybox/busybox id=b60d7f7b-c182-4cb3-9aa7-b6c4fe4928c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7873dea1bf14af2699b7d79486ea40d466f9de311977cf0a666fbc032df302ff
	Oct 18 09:43:15 old-k8s-version-619885 crio[774]: time="2025-10-18T09:43:15.204400289Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
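The CRI-O entries above are harvested from the node's systemd journal. A minimal sketch for tailing them directly on the profile node, assuming the kicbase image's usual crio systemd unit:

	minikube ssh -p old-k8s-version-619885 -- sudo journalctl -u crio --no-pager -n 50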
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	794890c627256       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   7873dea1bf14a       busybox                                          default
	0485b039396bd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   559337afffe7c       coredns-5dd5756b68-wklp4                         kube-system
	e4793cc365d0e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   c7060f3610d24       storage-provisioner                              kube-system
	adbd28305a8fd       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   be236079f8eaf       kindnet-vpnhf                                    kube-system
	0efd3d4708f16       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   1e7ce2ab28bb3       kube-proxy-spkr8                                 kube-system
	1790df380a4cf       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   18209e4403a27       etcd-old-k8s-version-619885                      kube-system
	54f3e8c97fe7d       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   f90c98ffbb907       kube-scheduler-old-k8s-version-619885            kube-system
	d3e681c3a0a0c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   7c5ab57a4d25a       kube-controller-manager-old-k8s-version-619885   kube-system
	614369a9fb7aa       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   3d86bffaf18c2       kube-apiserver-old-k8s-version-619885            kube-system
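This table is the node's CRI view of its containers; a sketch for regenerating it in place, assuming crictl is available inside the kicbase node (it ships with minikube's CRI-based runtimes):

	minikube ssh -p old-k8s-version-619885 -- sudo crictl ps -a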
	
	
	==> coredns [0485b039396bdce2e7c984621d403ce6c2d65a2846e89c60c86d0ceab1fae795] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56558 - 44905 "HINFO IN 3073081865627045469.6970107506082849132. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.037412544s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-619885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-619885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-619885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-619885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:43:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:43:06 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:43:06 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:43:06 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:43:06 +0000   Sat, 18 Oct 2025 09:43:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-619885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                5fe2f0a1-057b-421d-9214-f38cf6889451
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-wklp4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-619885                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-vpnhf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-619885             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-619885    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-spkr8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-619885             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-619885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node old-k8s-version-619885 event: Registered Node old-k8s-version-619885 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-619885 status is now: NodeReady
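The node summary above can be reproduced against the profile's kubeconfig context, in the same kubectl style used elsewhere in this report:

	kubectl --context old-k8s-version-619885 describe node old-k8s-version-619885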
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [1790df380a4cf8f4f0dce2b5c1c4f3fd1b702669ae00bd8850704625a1558db1] <==
	{"level":"info","ts":"2025-10-18T09:42:30.726118Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-18T09:42:30.726278Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:42:30.730219Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:42:30.730441Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:42:30.730556Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:42:30.731073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:42:30.730939Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:42:31.717189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-18T09:42:31.717229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-18T09:42:31.717256Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-18T09:42:31.717271Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:42:31.717276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:42:31.717285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-18T09:42:31.717293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:42:31.71807Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:42:31.718697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:42:31.718716Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:42:31.718694Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-619885 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:42:31.718896Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:42:31.719031Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:42:31.719031Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:42:31.719123Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:42:31.719149Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:42:31.720554Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-18T09:42:31.720555Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 09:43:16 up  1:25,  0 user,  load average: 4.02, 3.13, 1.80
	Linux old-k8s-version-619885 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [adbd28305a8fd21ec15ba742fd1bb3302c546ff76f34aede92373702e11e22fa] <==
	I1018 09:42:52.287682       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:42:52.310844       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:42:52.310995       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:42:52.311013       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:42:52.311044       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:42:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:42:52.511470       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:42:52.511504       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:42:52.511528       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:42:52.511692       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:42:53.014930       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:42:53.014970       1 metrics.go:72] Registering metrics
	I1018 09:42:53.015058       1 controller.go:711] "Syncing nftables rules"
	I1018 09:43:02.518943       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:43:02.518988       1 main.go:301] handling current node
	I1018 09:43:12.514084       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:43:12.514123       1 main.go:301] handling current node
	
	
	==> kube-apiserver [614369a9fb7aa1a9823734e75aa123d6a613337b74bed8cacfad220bfb979ac2] <==
	I1018 09:42:32.944870       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:42:32.945250       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:42:32.946166       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:42:32.946397       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:42:32.946470       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:42:32.946497       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:42:32.946538       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:42:32.947748       1 controller.go:624] quota admission added evaluator for: namespaces
	E1018 09:42:32.948416       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 09:42:33.152023       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:42:33.849658       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:42:33.853745       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:42:33.853766       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:42:34.258671       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:42:34.291248       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:42:34.361150       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:42:34.366509       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:42:34.367436       1 controller.go:624] quota admission added evaluator for: endpoints
	I1018 09:42:34.371169       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:42:34.928694       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:42:35.841894       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:42:35.852881       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:42:35.862067       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1018 09:42:49.138056       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:42:49.223229       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d3e681c3a0a0c7cabb4f6d559926ace30538a25d4ec14ad9a53ba635b6e0ba86] <==
	I1018 09:42:49.221306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="106.946µs"
	I1018 09:42:49.227763       1 shared_informer.go:318] Caches are synced for service account
	I1018 09:42:49.233181       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vpnhf"
	I1018 09:42:49.233780       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-spkr8"
	I1018 09:42:49.233292       1 shared_informer.go:318] Caches are synced for namespace
	I1018 09:42:49.261600       1 shared_informer.go:318] Caches are synced for disruption
	I1018 09:42:49.278492       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1018 09:42:49.278511       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1018 09:42:49.327876       1 shared_informer.go:318] Caches are synced for endpoint
	I1018 09:42:49.331327       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1018 09:42:49.388735       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:42:49.390056       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:42:49.702676       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:42:49.775654       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:42:49.775693       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:42:49.854260       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1018 09:42:49.871873       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-78gtn"
	I1018 09:42:49.881101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.067042ms"
	I1018 09:42:49.888314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.157391ms"
	I1018 09:42:49.888426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.832µs"
	I1018 09:43:02.839676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="113.936µs"
	I1018 09:43:02.855361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.183µs"
	I1018 09:43:04.025371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.185939ms"
	I1018 09:43:04.025532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="91.204µs"
	I1018 09:43:04.144314       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0efd3d4708f169369ac081b90b9de93f6b7f7e96c0576675ee871c205bf018ad] <==
	I1018 09:42:49.715350       1 server_others.go:69] "Using iptables proxy"
	I1018 09:42:49.728033       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:42:49.761300       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:42:49.763780       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:42:49.763837       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:42:49.763847       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:42:49.763880       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:42:49.764275       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:42:49.764299       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:42:49.765398       1 config.go:188] "Starting service config controller"
	I1018 09:42:49.772787       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:42:49.766943       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:42:49.773071       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:42:49.773101       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1018 09:42:49.767410       1 config.go:315] "Starting node config controller"
	I1018 09:42:49.773160       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:42:49.773193       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:42:49.874209       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [54f3e8c97fe7dbe0480b9d75ced915820bf083f1697dd760cc6c68ecfc7cd9c4] <==
	E1018 09:42:32.925672       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 09:42:32.925650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 09:42:32.925760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 09:42:32.923952       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 09:42:32.925797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 09:42:32.923063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:42:32.925818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1018 09:42:32.924577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 09:42:32.925852       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:42:32.924649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 09:42:32.924213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:42:32.925896       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:42:32.925385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 09:42:32.925875       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1018 09:42:33.750390       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1018 09:42:33.750426       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1018 09:42:33.889346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:42:33.889464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 09:42:33.956419       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1018 09:42:33.956460       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:42:33.958681       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 09:42:33.958705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 09:42:34.112471       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:42:34.112505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1018 09:42:36.116701       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.206423    1402 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.207288    1402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.243308    1402 topology_manager.go:215] "Topology Admit Handler" podUID="74de2fd0-602e-4deb-942b-b2d6236b4472" podNamespace="kube-system" podName="kube-proxy-spkr8"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.243941    1402 topology_manager.go:215] "Topology Admit Handler" podUID="4dadafc2-f316-4101-b535-142210628ad3" podNamespace="kube-system" podName="kindnet-vpnhf"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305214    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74de2fd0-602e-4deb-942b-b2d6236b4472-xtables-lock\") pod \"kube-proxy-spkr8\" (UID: \"74de2fd0-602e-4deb-942b-b2d6236b4472\") " pod="kube-system/kube-proxy-spkr8"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305254    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4dadafc2-f316-4101-b535-142210628ad3-xtables-lock\") pod \"kindnet-vpnhf\" (UID: \"4dadafc2-f316-4101-b535-142210628ad3\") " pod="kube-system/kindnet-vpnhf"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305276    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4dadafc2-f316-4101-b535-142210628ad3-lib-modules\") pod \"kindnet-vpnhf\" (UID: \"4dadafc2-f316-4101-b535-142210628ad3\") " pod="kube-system/kindnet-vpnhf"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305292    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74de2fd0-602e-4deb-942b-b2d6236b4472-lib-modules\") pod \"kube-proxy-spkr8\" (UID: \"74de2fd0-602e-4deb-942b-b2d6236b4472\") " pod="kube-system/kube-proxy-spkr8"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305321    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfkwb\" (UniqueName: \"kubernetes.io/projected/74de2fd0-602e-4deb-942b-b2d6236b4472-kube-api-access-sfkwb\") pod \"kube-proxy-spkr8\" (UID: \"74de2fd0-602e-4deb-942b-b2d6236b4472\") " pod="kube-system/kube-proxy-spkr8"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305361    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/74de2fd0-602e-4deb-942b-b2d6236b4472-kube-proxy\") pod \"kube-proxy-spkr8\" (UID: \"74de2fd0-602e-4deb-942b-b2d6236b4472\") " pod="kube-system/kube-proxy-spkr8"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305390    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4dadafc2-f316-4101-b535-142210628ad3-cni-cfg\") pod \"kindnet-vpnhf\" (UID: \"4dadafc2-f316-4101-b535-142210628ad3\") " pod="kube-system/kindnet-vpnhf"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.305471    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4dl\" (UniqueName: \"kubernetes.io/projected/4dadafc2-f316-4101-b535-142210628ad3-kube-api-access-qs4dl\") pod \"kindnet-vpnhf\" (UID: \"4dadafc2-f316-4101-b535-142210628ad3\") " pod="kube-system/kindnet-vpnhf"
	Oct 18 09:42:49 old-k8s-version-619885 kubelet[1402]: I1018 09:42:49.977370    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-spkr8" podStartSLOduration=0.977321156 podCreationTimestamp="2025-10-18 09:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:42:49.977254643 +0000 UTC m=+14.161323049" watchObservedRunningTime="2025-10-18 09:42:49.977321156 +0000 UTC m=+14.161389562"
	Oct 18 09:42:52 old-k8s-version-619885 kubelet[1402]: I1018 09:42:52.990638    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-vpnhf" podStartSLOduration=1.572364613 podCreationTimestamp="2025-10-18 09:42:49 +0000 UTC" firstStartedPulling="2025-10-18 09:42:49.563454696 +0000 UTC m=+13.747523093" lastFinishedPulling="2025-10-18 09:42:51.981671352 +0000 UTC m=+16.165739750" observedRunningTime="2025-10-18 09:42:52.990426247 +0000 UTC m=+17.174494654" watchObservedRunningTime="2025-10-18 09:42:52.99058127 +0000 UTC m=+17.174649675"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.812403    1402 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.840155    1402 topology_manager.go:215] "Topology Admit Handler" podUID="666ccb81-9bb0-4ee0-8fe1-8d060091f9b0" podNamespace="kube-system" podName="coredns-5dd5756b68-wklp4"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.842135    1402 topology_manager.go:215] "Topology Admit Handler" podUID="398d98bd-a962-40a6-ba34-a3d0a5ea35ca" podNamespace="kube-system" podName="storage-provisioner"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.905221    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/398d98bd-a962-40a6-ba34-a3d0a5ea35ca-tmp\") pod \"storage-provisioner\" (UID: \"398d98bd-a962-40a6-ba34-a3d0a5ea35ca\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.905281    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-config-volume\") pod \"coredns-5dd5756b68-wklp4\" (UID: \"666ccb81-9bb0-4ee0-8fe1-8d060091f9b0\") " pod="kube-system/coredns-5dd5756b68-wklp4"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.905390    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm2qg\" (UniqueName: \"kubernetes.io/projected/398d98bd-a962-40a6-ba34-a3d0a5ea35ca-kube-api-access-zm2qg\") pod \"storage-provisioner\" (UID: \"398d98bd-a962-40a6-ba34-a3d0a5ea35ca\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:02 old-k8s-version-619885 kubelet[1402]: I1018 09:43:02.905476    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6thn\" (UniqueName: \"kubernetes.io/projected/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-kube-api-access-n6thn\") pod \"coredns-5dd5756b68-wklp4\" (UID: \"666ccb81-9bb0-4ee0-8fe1-8d060091f9b0\") " pod="kube-system/coredns-5dd5756b68-wklp4"
	Oct 18 09:43:04 old-k8s-version-619885 kubelet[1402]: I1018 09:43:04.009857    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.009773934 podCreationTimestamp="2025-10-18 09:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:43:04.009576416 +0000 UTC m=+28.193644823" watchObservedRunningTime="2025-10-18 09:43:04.009773934 +0000 UTC m=+28.193842367"
	Oct 18 09:43:06 old-k8s-version-619885 kubelet[1402]: I1018 09:43:06.123117    1402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wklp4" podStartSLOduration=17.123055663 podCreationTimestamp="2025-10-18 09:42:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:43:04.018977445 +0000 UTC m=+28.203045852" watchObservedRunningTime="2025-10-18 09:43:06.123055663 +0000 UTC m=+30.307124069"
	Oct 18 09:43:06 old-k8s-version-619885 kubelet[1402]: I1018 09:43:06.123547    1402 topology_manager.go:215] "Topology Admit Handler" podUID="2e50d21c-d2e2-4cc7-b111-04c19153fc41" podNamespace="default" podName="busybox"
	Oct 18 09:43:06 old-k8s-version-619885 kubelet[1402]: I1018 09:43:06.225206    1402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55xz5\" (UniqueName: \"kubernetes.io/projected/2e50d21c-d2e2-4cc7-b111-04c19153fc41-kube-api-access-55xz5\") pod \"busybox\" (UID: \"2e50d21c-d2e2-4cc7-b111-04c19153fc41\") " pod="default/busybox"
	
	
	==> storage-provisioner [e4793cc365d0e36f2732def4f3842591831928d3f8fc019322c8e12c294b61e5] <==
	I1018 09:43:03.203069       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:43:03.213330       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:43:03.213377       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1018 09:43:03.220635       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:43:03.220793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bfdfbff6-fdd7-49a3-996d-5991eb5b28e9", APIVersion:"v1", ResourceVersion:"396", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-619885_a7f44aea-9829-4d3b-949a-9c7bf299e42b became leader
	I1018 09:43:03.220909       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-619885_a7f44aea-9829-4d3b-949a-9c7bf299e42b!
	I1018 09:43:03.321204       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-619885_a7f44aea-9829-4d3b-949a-9c7bf299e42b!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-619885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.08s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (237.463062ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:43:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
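The MK_ADDON_ENABLE_PAUSED exit traces back to the paused-state probe quoted in stderr, which shells out to runc. A manual reproduction sketch, assuming the docker driver (so the node is the no-preload-589869 container) and runc's default state root of /run/runc:

	docker exec no-preload-589869 sudo runc list -f json
	# per the stderr above, this prints:
	# time="2025-10-18T09:43:25Z" level=error msg="open /run/runc: no such file or directory"

A missing /run/runc is consistent with no container ever having been created through runc at that root (for instance, if CRI-O is pointed at a different runtime root); that reading is an inference from the log, not something the test asserts.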
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-589869 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-589869 describe deploy/metrics-server -n kube-system: exit status 1 (56.564427ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-589869 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
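The image string the test inspects lives on the deployment's pod template; a jsonpath sketch of that check (it would fail identically here, since the metrics-server deployment was never created):

	kubectl --context no-preload-589869 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'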
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589869
helpers_test.go:243: (dbg) docker inspect no-preload-589869:

-- stdout --
	[
	    {
	        "Id": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	        "Created": "2025-10-18T09:42:25.517759152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 356900,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:42:25.559134527Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hostname",
	        "HostsPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hosts",
	        "LogPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58-json.log",
	        "Name": "/no-preload-589869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	                "LowerDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589869",
	                "Source": "/var/lib/docker/volumes/no-preload-589869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589869",
	                "name.minikube.sigs.k8s.io": "no-preload-589869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79e7f5edaf3d450586282dc6d72737fa8c111e26fe75f4fa075aa522e9e4824a",
	            "SandboxKey": "/var/run/docker/netns/79e7f5edaf3d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33186"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33187"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33190"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33188"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33189"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-589869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:69:83:a7:2b:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b43a4d9c76b9aa8730370c98575c8c91fc6813136b487c412c5288120a5a3e49",
	                    "EndpointID": "6922197f6bf1060e230411033375b1eedac9fc946777e5a0e8217c2de559c002",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589869",
	                        "0eccfe69a507"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
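
The inspect output above shows how the no-preload-589869 node is exposed: each container port (22, 2376, 5000, 8443, 32443) is bound to a distinct ephemeral port on 127.0.0.1, and minikube reaches the apiserver through the forwarded 8443 mapping rather than the container IP. A sketch for pulling one mapping back out of the same data with docker's Go templates (container name taken from this run):

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-589869
    # prints 33189 for the instance captured above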
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589869 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ ssh     │ -p cilium-345705 sudo crio config                                                                                                                                                                                                             │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p cilium-345705                                                                                                                                                                                                                              │ cilium-345705             │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p running-upgrade-896586                                                                                                                                                                                                                     │ running-upgrade-896586    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                   │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                                              │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p pause-238319                                                                                                                                                                                                                               │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ force-systemd-flag-565668 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p force-systemd-flag-565668                                                                                                                                                                                                                  │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
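
The Audit table above is minikube's persisted command history for this host, embedded in the `minikube logs` output. Recent minikube releases can also print it on its own (treat the flag as an assumption for older builds):

    out/minikube-linux-amd64 logs --audit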
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:42:24
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:42:24.595022  356384 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:42:24.595321  356384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:42:24.595335  356384 out.go:374] Setting ErrFile to fd 2...
	I1018 09:42:24.595342  356384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:42:24.595686  356384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:42:24.596226  356384 out.go:368] Setting JSON to false
	I1018 09:42:24.597306  356384 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5089,"bootTime":1760775456,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:42:24.597409  356384 start.go:141] virtualization: kvm guest
	I1018 09:42:24.599457  356384 out.go:179] * [no-preload-589869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:42:24.600502  356384 notify.go:220] Checking for updates...
	I1018 09:42:24.600680  356384 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:42:24.602226  356384 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:42:24.603392  356384 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:24.607059  356384 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:42:24.608262  356384 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:42:24.609402  356384 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:42:24.610779  356384 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:24.610912  356384 config.go:182] Loaded profile config "kubernetes-upgrade-919613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:24.611008  356384 config.go:182] Loaded profile config "old-k8s-version-619885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:42:24.611092  356384 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:42:24.638131  356384 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:42:24.638294  356384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:42:24.702675  356384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:42:24.691965323 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:42:24.702843  356384 docker.go:318] overlay module found
	I1018 09:42:24.704777  356384 out.go:179] * Using the docker driver based on user configuration
	I1018 09:42:24.705981  356384 start.go:305] selected driver: docker
	I1018 09:42:24.705998  356384 start.go:925] validating driver "docker" against <nil>
	I1018 09:42:24.706011  356384 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:42:24.706561  356384 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:42:24.768569  356384 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:42:24.757198453 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:42:24.768763  356384 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:42:24.769068  356384 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:42:24.771255  356384 out.go:179] * Using Docker driver with root privileges
	I1018 09:42:24.773054  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:24.773130  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:24.773142  356384 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:42:24.773215  356384 start.go:349] cluster config:
	{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:24.774434  356384 out.go:179] * Starting "no-preload-589869" primary control-plane node in "no-preload-589869" cluster
	I1018 09:42:24.775575  356384 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:42:24.776636  356384 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:42:24.777631  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:42:24.777674  356384 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:42:24.777787  356384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:42:24.777863  356384 cache.go:107] acquiring lock: {Name:mk8d380524b774b5edadec7411def9ea12a01591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777866  356384 cache.go:107] acquiring lock: {Name:mka90deb6de3b7e19386c6d0f0785fc3e96d2e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777956  356384 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:42:24.777968  356384 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.174µs
	I1018 09:42:24.777984  356384 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:42:24.777950  356384 cache.go:107] acquiring lock: {Name:mk9ad0aa9744bfc6007683a43233309af99e2ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778000  356384 cache.go:107] acquiring lock: {Name:mk2f4cf60554cd9991205940f1aa9911f9bb383a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.777992  356384 cache.go:107] acquiring lock: {Name:mk3d292d197011122be585423e2f701ad4e9ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778027  356384 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:24.778028  356384 cache.go:107] acquiring lock: {Name:mka2dd49281e4623d770ed33d958b114b7cc789f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778122  356384 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:24.778150  356384 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:42:24.778148  356384 cache.go:107] acquiring lock: {Name:mk61b8919142cd1b35d71e72ba258fc114b79f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778199  356384 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:24.778245  356384 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:24.778333  356384 cache.go:107] acquiring lock: {Name:mka49eac321c9a155354693a3a6be91b02cdc4a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.778365  356384 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:24.778408  356384 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:24.777859  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json: {Name:mk65166fc402595ea5b7b4ecb3249b12bd86a17d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.779855  356384 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:24.779936  356384 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:42:24.779861  356384 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:24.779856  356384 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:24.779895  356384 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:24.779996  356384 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:24.780150  356384 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:24.805746  356384 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:42:24.805771  356384 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:42:24.805792  356384 cache.go:232] Successfully downloaded all kic artifacts
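
Because this start uses --preload=false, none of the v1.34.1 control-plane images are in the local Docker daemon (the "daemon lookup ... No such image" lines above), so minikube resolves them through its on-disk image cache; only the kicbase image is found in the daemon and skipped. A sketch for checking what that cache already holds, using the MINIKUBE_HOME shown earlier in this log:

    ls /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/
    # anything missing here is pulled from the registry during provisioning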
	I1018 09:42:24.805832  356384 start.go:360] acquireMachinesLock for no-preload-589869: {Name:mk63da8322dd3ab3d8f833b8b716fde137314571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:42:24.805944  356384 start.go:364] duration metric: took 89.937µs to acquireMachinesLock for "no-preload-589869"
	I1018 09:42:24.805973  356384 start.go:93] Provisioning new machine with config: &{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:42:24.806072  356384 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:42:23.593525  352186 cli_runner.go:164] Run: docker network inspect old-k8s-version-619885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:42:23.610580  352186 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:42:23.614757  352186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:42:23.682896  352186 kubeadm.go:883] updating cluster {Name:old-k8s-version-619885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:42:23.683025  352186 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 09:42:23.683108  352186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:23.824896  352186 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:42:23.824922  352186 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:42:23.824990  352186 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:23.853315  352186 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:42:23.853335  352186 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:42:23.853344  352186 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1018 09:42:23.853454  352186 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-619885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
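
The generated drop-in above clears the packaged ExecStart and re-points kubelet at the version-pinned binary, with the hostname override and node IP wired in; a few lines below it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node. A sketch for inspecting the file that actually landed (profile name taken from this run):

    out/minikube-linux-amd64 ssh -p old-k8s-version-619885 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf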
	I1018 09:42:23.853537  352186 ssh_runner.go:195] Run: crio config
	I1018 09:42:23.909299  352186 cni.go:84] Creating CNI manager for ""
	I1018 09:42:23.909324  352186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:23.909345  352186 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:42:23.909420  352186 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-619885 NodeName:old-k8s-version-619885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:42:23.909575  352186 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-619885"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
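
The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new and, later in this log, copied over /var/tmp/minikube/kubeadm.yaml before kubeadm consumes it. A sketch for checking what is actually live on the node (paths taken from this log):

    out/minikube-linux-amd64 ssh -p old-k8s-version-619885 -- sudo cat /var/tmp/minikube/kubeadm.yaml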
	I1018 09:42:23.909641  352186 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1018 09:42:23.920088  352186 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:42:23.920152  352186 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:42:23.929625  352186 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1018 09:42:23.951016  352186 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:42:23.970921  352186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1018 09:42:23.983549  352186 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:42:23.987586  352186 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
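
Both /etc/hosts updates in this log (host.minikube.internal and control-plane.minikube.internal) use the same idempotent pattern: filter out any previous line for the name, append a fresh tab-separated mapping, and sudo-copy the temp file back so the shell redirection itself needs no root. Factored out as a standalone sketch (name and IP taken from this run):

    NAME=control-plane.minikube.internal; IP=192.168.76.2
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts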
	I1018 09:42:23.997774  352186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:24.115707  352186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:24.137573  352186 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885 for IP: 192.168.76.2
	I1018 09:42:24.137598  352186 certs.go:195] generating shared ca certs ...
	I1018 09:42:24.137633  352186 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.137797  352186 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:24.137868  352186 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:24.137883  352186 certs.go:257] generating profile certs ...
	I1018 09:42:24.137952  352186 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key
	I1018 09:42:24.137977  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt with IP's: []
	I1018 09:42:24.654726  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt ...
	I1018 09:42:24.654763  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: {Name:mkbedca19eb398c9621a3ec385979fbd97e31283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.655003  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key ...
	I1018 09:42:24.655030  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.key: {Name:mkb17d76dd188c4bceebac6fb7f8c290bd94c55b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.655188  352186 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00
	I1018 09:42:24.655219  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1018 09:42:25.167779  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 ...
	I1018 09:42:25.167812  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00: {Name:mk612dc3760272fed390af6cd5dfff2a120b4b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.168020  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00 ...
	I1018 09:42:25.168044  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00: {Name:mk266f10cb7773d7ca7e765ec90aef469fd27911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.168173  352186 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt.1eba7d00 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt
	I1018 09:42:25.168260  352186 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key.1eba7d00 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key
	I1018 09:42:25.168332  352186 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key
	I1018 09:42:25.168348  352186 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt with IP's: []
	I1018 09:42:25.921619  352186 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt ...
	I1018 09:42:25.921660  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt: {Name:mkeeb24b84c62fb5014c9d501ad16ca2bd32e80e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.921870  352186 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key ...
	I1018 09:42:25.921893  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key: {Name:mk8b9c293f16d81aac5acbfffecc4f1758fa20f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:25.922139  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:25.922190  352186 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:25.922203  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:25.922240  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:25.922270  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:25.922298  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:25.922357  352186 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:25.923277  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:25.949610  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:25.969756  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:25.987945  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:26.011975  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1018 09:42:26.036783  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:42:26.059055  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:26.080933  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:26.107087  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:26.134959  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:26.155611  352186 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:26.174912  352186 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:26.191449  352186 ssh_runner.go:195] Run: openssl version
	I1018 09:42:26.198158  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:26.207664  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.212197  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.212253  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:26.255247  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:26.265431  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:26.274898  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.279026  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.279080  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:26.329439  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:26.341244  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:26.352695  352186 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.358603  352186 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.358668  352186 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:26.412688  352186 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
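
The openssl sequence above builds the standard CA hash-link layout: `openssl x509 -hash -noout -in <cert>` prints the subject-name hash, and OpenSSL looks certificates up via symlinks named <hash>.0 under /etc/ssl/certs, which is exactly what the `test -L ... || ln -fs ...` guards create here (b5213941.0, 51391683.0, 3ec20f2e.0). Reproducing one link by hand, as a sketch:

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$H.0"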
	I1018 09:42:26.426086  352186 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:26.433668  352186 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:42:26.433740  352186 kubeadm.go:400] StartCluster: {Name:old-k8s-version-619885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-619885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:26.433939  352186 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:26.434040  352186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:26.476108  352186 cri.go:89] found id: ""
	I1018 09:42:26.476178  352186 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:26.487710  352186 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:26.501523  352186 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:42:26.501750  352186 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:26.512206  352186 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:42:26.512240  352186 kubeadm.go:157] found existing configuration files:
	
	I1018 09:42:26.512294  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:26.522227  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:42:26.522299  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:42:26.535560  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:26.546231  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:42:26.546303  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:42:26.556582  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:26.566420  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:42:26.566568  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:26.576682  352186 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:26.588704  352186 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:42:26.588784  352186 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
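The grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is deleted so kubeadm regenerates it. Here every grep exits with status 2 because the files do not exist yet (first start), and the rm calls are no-ops. A sketch of the rule, assuming a plain substring match (the helper name is hypothetical):

    // Sketch: remove kubeconfig files that do not point at our control plane.
    package main

    import (
    	"os"
    	"strings"
    )

    func removeIfStale(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if os.IsNotExist(err) {
    		return nil // nothing to clean up (grep exited 2 in the log)
    	}
    	if err != nil {
    		return err
    	}
    	if strings.Contains(string(data), endpoint) {
    		return nil // already targets our control plane; keep it
    	}
    	return os.Remove(path) // stale: kubeadm will rewrite it
    }

    func main() {
    	const ep = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		_ = removeIfStale(f, ep) // errors ignored in this sketch
    	}
    }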
	I1018 09:42:26.606910  352186 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:42:26.664947  352186 kubeadm.go:318] [init] Using Kubernetes version: v1.28.0
	I1018 09:42:26.665040  352186 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:42:26.706851  352186 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:42:26.706978  352186 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:42:26.707063  352186 kubeadm.go:318] OS: Linux
	I1018 09:42:26.707119  352186 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:42:26.707194  352186 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:42:26.707259  352186 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:42:26.707328  352186 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:42:26.707398  352186 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:42:26.707474  352186 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:42:26.707543  352186 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:42:26.707613  352186 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:42:26.790376  352186 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:42:26.790547  352186 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:42:26.790721  352186 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:42:26.952985  352186 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:42:26.957054  352186 out.go:252]   - Generating certificates and keys ...
	I1018 09:42:26.957187  352186 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:42:26.957283  352186 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:42:27.194045  352186 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:42:27.433910  352186 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:42:23.682810  353123 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:23.827784  353123 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:23.852490  353123 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613 for IP: 192.168.85.2
	I1018 09:42:23.852519  353123 certs.go:195] generating shared ca certs ...
	I1018 09:42:23.852542  353123 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:23.852714  353123 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:23.852789  353123 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:23.852806  353123 certs.go:257] generating profile certs ...
	I1018 09:42:23.852928  353123 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.key
	I1018 09:42:23.852988  353123 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.key.354dbbd0
	I1018 09:42:23.853041  353123 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.key
	I1018 09:42:23.853191  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:23.853232  353123 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:23.853244  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:23.853275  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:23.853308  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:23.853337  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:23.853385  353123 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:23.854238  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:23.874296  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:23.895842  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:23.917288  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:23.940345  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1018 09:42:23.965071  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:42:23.983867  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:24.002341  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:24.020110  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:24.040569  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:24.061544  353123 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:24.081141  353123 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:24.094264  353123 ssh_runner.go:195] Run: openssl version
	I1018 09:42:24.100476  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:24.109172  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.112963  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.113025  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:24.151850  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:24.161775  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:24.170994  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.175873  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.175933  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:24.214129  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:42:24.222588  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:24.231387  353123 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.235136  353123 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.235189  353123 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:24.271566  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:24.280426  353123 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:24.284550  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:42:24.320880  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:42:24.362416  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:42:24.410172  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:42:24.450102  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:42:24.490540  353123 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
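The -checkend 86400 probes ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero means it is expiring and should be regenerated. A sketch with a hypothetical helper:

    // Sketch: `openssl x509 -checkend 86400` exits 0 if the certificate will
    // still be valid 24h from now, 1 if it will have expired by then.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func validForADay(cert string) bool {
    	err := exec.Command("openssl", "x509", "-noout",
    		"-in", cert, "-checkend", "86400").Run()
    	return err == nil // non-zero exit -> regenerate the cert
    }

    func main() {
    	fmt.Println(validForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }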
	I1018 09:42:24.531245  353123 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-919613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-919613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:24.531326  353123 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:24.531370  353123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:24.564249  353123 cri.go:89] found id: ""
	I1018 09:42:24.564313  353123 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:24.573002  353123 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:42:24.573021  353123 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:42:24.573069  353123 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:42:24.581734  353123 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.582469  353123 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-919613" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:24.582868  353123 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-919613" cluster setting kubeconfig missing "kubernetes-upgrade-919613" context setting]
	I1018 09:42:24.583575  353123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:24.584355  353123 kapi.go:59] client config for kubernetes-upgrade-919613: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.crt", KeyFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/client.key", CAFile:"/home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2819ca0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1018 09:42:24.584955  353123 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1018 09:42:24.584980  353123 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1018 09:42:24.584988  353123 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1018 09:42:24.584995  353123 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1018 09:42:24.585000  353123 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1018 09:42:24.585476  353123 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:42:24.594382  353123 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-18 09:41:59.549628879 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-18 09:42:23.583569144 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.85.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-919613"
	   kubeletExtraArgs:
	-    node-ip: 192.168.85.2
	+    - name: "node-ip"
	+      value: "192.168.85.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.34.1
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
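The diff captures the kubeadm.k8s.io/v1beta3 -> v1beta4 migration: extraArgs change from a string map to a list of name/value pairs, the etcd proxy-refresh-interval override is dropped, and kubernetesVersion is bumped to the upgrade target. Detecting the drift itself needs only diff's exit status (0 identical, 1 different); a sketch, not minikube's implementation:

    // Sketch: decide whether the rendered kubeadm config changed by running
    // `diff -u old new` and inspecting the exit code.
    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func configDrifted(oldPath, newPath string) (bool, string, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, string(out), nil // files differ: reconfigure cluster
    	}
    	return false, "", err // nil err: identical; any other err: diff failed
    }

    func main() {
    	drifted, diff, err := configDrifted(
    		"/var/tmp/minikube/kubeadm.yaml",
    		"/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Print(diff)
    	}
    }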
	I1018 09:42:24.594400  353123 kubeadm.go:1160] stopping kube-system containers ...
	I1018 09:42:24.594411  353123 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1018 09:42:24.594459  353123 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:24.626345  353123 cri.go:89] found id: ""
	I1018 09:42:24.626418  353123 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1018 09:42:24.663957  353123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:24.674070  353123 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct 18 09:42 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Oct 18 09:42 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Oct 18 09:42 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Oct 18 09:42 /etc/kubernetes/scheduler.conf
	
	I1018 09:42:24.674188  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:24.684460  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:24.693207  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:24.702074  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.702139  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:24.710072  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:24.718230  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:42:24.718285  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:42:24.728366  353123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:24.740347  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:24.794858  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:26.934659  353123 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.139755661s)
	I1018 09:42:26.934735  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:27.109382  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1018 09:42:27.163381  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
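Rather than running a full `kubeadm init`, the restart path replays individual init phases against the regenerated config, in exactly the order logged above. A compact sketch of that sequence (the real invocation prefixes PATH with /var/lib/minikube/binaries/<version>, as shown in the log):

    // Sketch: replay kubeadm init phases one at a time.
    package main

    import "os/exec"

    func runPhase(phase ...string) error {
    	args := append([]string{"init", "phase"}, phase...)
    	args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    	return exec.Command("kubeadm", args...).Run()
    }

    func main() {
    	for _, p := range [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	} {
    		if err := runPhase(p...); err != nil {
    			panic(err)
    		}
    	}
    }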
	I1018 09:42:27.222857  353123 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:42:27.222927  353123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:42:27.723958  353123 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:42:27.739035  353123 api_server.go:72] duration metric: took 516.184716ms to wait for apiserver process to appear ...
	I1018 09:42:27.739066  353123 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:42:27.739088  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:27.739456  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:28.239989  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
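The connection-refused result on the first probe is expected: the static pods are still starting, so the wait loop keeps hitting /healthz on a roughly 500ms cadence until the apiserver answers. A sketch of such a loop (TLS verification is skipped here for brevity; minikube itself trusts the cluster CA):

    // Sketch: poll the apiserver /healthz endpoint until it reports OK.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		// Connection refused just means the apiserver is not up yet: retry.
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitHealthy("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }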
	I1018 09:42:27.608860  352186 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:42:27.751380  352186 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:42:27.972019  352186 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:42:27.972221  352186 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-619885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:42:28.173330  352186 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:42:28.173543  352186 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-619885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:42:28.308805  352186 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:42:28.438420  352186 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:42:28.907675  352186 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:42:28.907758  352186 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:42:28.974488  352186 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:42:29.128735  352186 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:42:29.281752  352186 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:42:29.460756  352186 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:42:29.461515  352186 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:42:29.465451  352186 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:42:24.812105  356384 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:42:24.812328  356384 start.go:159] libmachine.API.Create for "no-preload-589869" (driver="docker")
	I1018 09:42:24.812358  356384 client.go:168] LocalClient.Create starting
	I1018 09:42:24.812443  356384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:42:24.812482  356384 main.go:141] libmachine: Decoding PEM data...
	I1018 09:42:24.812502  356384 main.go:141] libmachine: Parsing certificate...
	I1018 09:42:24.812564  356384 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:42:24.812595  356384 main.go:141] libmachine: Decoding PEM data...
	I1018 09:42:24.812607  356384 main.go:141] libmachine: Parsing certificate...
	I1018 09:42:24.813055  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:42:24.836406  356384 cli_runner.go:211] docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:42:24.836479  356384 network_create.go:284] running [docker network inspect no-preload-589869] to gather additional debugging logs...
	I1018 09:42:24.836495  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869
	W1018 09:42:24.857225  356384 cli_runner.go:211] docker network inspect no-preload-589869 returned with exit code 1
	I1018 09:42:24.857252  356384 network_create.go:287] error running [docker network inspect no-preload-589869]: docker network inspect no-preload-589869: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-589869 not found
	I1018 09:42:24.857263  356384 network_create.go:289] output of [docker network inspect no-preload-589869]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-589869 not found
	
	** /stderr **
	I1018 09:42:24.857351  356384 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:42:24.876525  356384 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:42:24.877044  356384 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:42:24.877417  356384 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:42:24.878137  356384 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f172a0295669 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:92:54:85:1e:fa:a0} reservation:<nil>}
	I1018 09:42:24.878599  356384 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:42:24.879221  356384 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e8c5a0}
	I1018 09:42:24.879243  356384 network_create.go:124] attempt to create docker network no-preload-589869 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 09:42:24.879295  356384 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-589869 no-preload-589869
	I1018 09:42:24.950286  356384 network_create.go:108] docker network no-preload-589869 192.168.94.0/24 created
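The scan above walks candidate private /24s and takes the first one no existing Docker bridge occupies; judging from this log the third octet advances by 9 per attempt (49, 58, 67, 76, 85, 94). A sketch under that assumption — the stepping is inferred from the log, not confirmed against minikube's source:

    // Sketch: find the first free 192.168.x.0/24 subnet for a new network.
    package main

    import (
    	"fmt"
    	"net"
    )

    func firstFreeSubnet(taken []string) (string, error) {
    	used := map[string]bool{}
    	for _, t := range taken {
    		used[t] = true
    	}
    	for third := 49; third <= 255; third += 9 { // 49, 58, 67, 76, 85, 94, ...
    		cidr := fmt.Sprintf("192.168.%d.0/24", third)
    		if _, _, err := net.ParseCIDR(cidr); err != nil {
    			return "", err
    		}
    		if !used[cidr] {
    			return cidr, nil
    		}
    	}
    	return "", fmt.Errorf("no free private /24 found")
    }

    func main() {
    	taken := []string{
    		"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24",
    		"192.168.76.0/24", "192.168.85.0/24",
    	}
    	free, _ := firstFreeSubnet(taken)
    	fmt.Println(free) // 192.168.94.0/24, matching the log
    }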
	I1018 09:42:24.950315  356384 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-589869" container
	I1018 09:42:24.950366  356384 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:42:24.968918  356384 cli_runner.go:164] Run: docker volume create no-preload-589869 --label name.minikube.sigs.k8s.io=no-preload-589869 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:42:24.971754  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:42:24.977857  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:42:24.983434  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:42:24.988577  356384 oci.go:103] Successfully created a docker volume no-preload-589869
	I1018 09:42:24.988645  356384 cli_runner.go:164] Run: docker run --rm --name no-preload-589869-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589869 --entrypoint /usr/bin/test -v no-preload-589869:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:42:25.009313  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:42:25.013125  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:42:25.017201  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:42:25.068339  356384 cache.go:162] opening:  /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:42:25.093224  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:42:25.093248  356384 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 315.348718ms
	I1018 09:42:25.093259  356384 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:42:25.436785  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:42:25.436810  356384 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 658.480781ms
	I1018 09:42:25.436836  356384 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:42:25.440048  356384 oci.go:107] Successfully prepared a docker volume no-preload-589869
	I1018 09:42:25.440085  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1018 09:42:25.440168  356384 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:42:25.440216  356384 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:42:25.440265  356384 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:42:25.501191  356384 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-589869 --name no-preload-589869 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-589869 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-589869 --network no-preload-589869 --ip 192.168.94.2 --volume no-preload-589869:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:42:25.785549  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Running}}
	I1018 09:42:25.806603  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:25.826306  356384 cli_runner.go:164] Run: docker exec no-preload-589869 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:42:25.874703  356384 oci.go:144] the created container "no-preload-589869" has a running status.
	I1018 09:42:25.874738  356384 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa...
	I1018 09:42:25.935372  356384 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:42:25.963682  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:25.982275  356384 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:42:25.982299  356384 kic_runner.go:114] Args: [docker exec --privileged no-preload-589869 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:42:26.035990  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:42:26.057306  356384 machine.go:93] provisionDockerMachine start ...
	I1018 09:42:26.057453  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:26.079047  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:26.079424  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:26.079448  356384 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:42:26.080190  356384 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45294->127.0.0.1:33186: read: connection reset by peer
	I1018 09:42:26.454630  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:42:26.454659  356384 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.676514025s
	I1018 09:42:26.454674  356384 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:42:26.488188  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:42:26.488220  356384 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.710237667s
	I1018 09:42:26.488239  356384 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:42:26.602646  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:42:26.602680  356384 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.824823422s
	I1018 09:42:26.602698  356384 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:42:26.674620  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:42:26.674652  356384 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.896652242s
	I1018 09:42:26.674668  356384 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:42:26.964271  356384 cache.go:157] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:42:26.964303  356384 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.186368093s
	I1018 09:42:26.964318  356384 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:42:26.964345  356384 cache.go:87] Successfully saved all images to host disk.
	I1018 09:42:29.214021  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:42:29.214051  356384 ubuntu.go:182] provisioning hostname "no-preload-589869"
	I1018 09:42:29.214113  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.232562  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.232783  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.232797  356384 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-589869 && echo "no-preload-589869" | sudo tee /etc/hostname
	I1018 09:42:29.375720  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:42:29.375810  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.395319  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.395594  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.395624  356384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-589869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-589869/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-589869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:42:29.532349  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:42:29.532381  356384 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:42:29.532403  356384 ubuntu.go:190] setting up certificates
	I1018 09:42:29.532414  356384 provision.go:84] configureAuth start
	I1018 09:42:29.532470  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:29.550411  356384 provision.go:143] copyHostCerts
	I1018 09:42:29.550472  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:42:29.550483  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:42:29.550555  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:42:29.550688  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:42:29.550701  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:42:29.550744  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:42:29.550851  356384 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:42:29.550873  356384 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:42:29.550912  356384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:42:29.551008  356384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.no-preload-589869 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-589869]
	I1018 09:42:29.707126  356384 provision.go:177] copyRemoteCerts
	I1018 09:42:29.707186  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:42:29.707230  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.726095  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:29.823612  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:42:29.843271  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:42:29.861915  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:42:29.880312  356384 provision.go:87] duration metric: took 347.878604ms to configureAuth
	I1018 09:42:29.880343  356384 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:42:29.880536  356384 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:42:29.880662  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:29.899262  356384 main.go:141] libmachine: Using SSH client type: native
	I1018 09:42:29.899477  356384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33186 <nil> <nil>}
	I1018 09:42:29.899494  356384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:42:30.148636  356384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:42:30.148660  356384 machine.go:96] duration metric: took 4.091329665s to provisionDockerMachine
	I1018 09:42:30.148674  356384 client.go:171] duration metric: took 5.336305888s to LocalClient.Create
	I1018 09:42:30.148700  356384 start.go:167] duration metric: took 5.336372221s to libmachine.API.Create "no-preload-589869"
	I1018 09:42:30.148710  356384 start.go:293] postStartSetup for "no-preload-589869" (driver="docker")
	I1018 09:42:30.148733  356384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:42:30.148800  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:42:30.148876  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.167062  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.266004  356384 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:42:30.269578  356384 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:42:30.269613  356384 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:42:30.269625  356384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:42:30.269681  356384 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:42:30.269867  356384 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:42:30.270008  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:42:30.278467  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:30.298168  356384 start.go:296] duration metric: took 149.439969ms for postStartSetup
	I1018 09:42:30.298558  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:30.316704  356384 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:42:30.317019  356384 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:42:30.317075  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.335598  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.429171  356384 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:42:30.433788  356384 start.go:128] duration metric: took 5.627697899s to createHost
	I1018 09:42:30.433818  356384 start.go:83] releasing machines lock for "no-preload-589869", held for 5.627859528s
	I1018 09:42:30.433914  356384 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:42:30.452187  356384 ssh_runner.go:195] Run: cat /version.json
	I1018 09:42:30.452243  356384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:42:30.452259  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.452323  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:42:30.470920  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.471681  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:42:30.626905  356384 ssh_runner.go:195] Run: systemctl --version
	I1018 09:42:30.633801  356384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:42:30.671875  356384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:42:30.678001  356384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:42:30.678087  356384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:42:30.713128  356384 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
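Note: the find one-liner above disables competing bridge/podman CNI definitions by renaming them with a .mk_disabled suffix, so only minikube's chosen CNI (kindnet, selected later in this run) is loaded. An unrolled sketch of the same operation:

	# rename bridge/podman CNI configs so the runtime no longer parses them
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -f "$f" ] || continue
	  case "$f" in *.mk_disabled) continue ;; esac   # already disabled
	  sudo mv "$f" "$f.mk_disabled"
	done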
	I1018 09:42:30.713152  356384 start.go:495] detecting cgroup driver to use...
	I1018 09:42:30.713187  356384 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:42:30.713237  356384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:42:30.735929  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:42:30.751039  356384 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:42:30.751102  356384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:42:30.769425  356384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:42:30.788446  356384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:42:30.884031  356384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:42:30.988551  356384 docker.go:234] disabling docker service ...
	I1018 09:42:30.988630  356384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:42:31.009026  356384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:42:31.021429  356384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:42:31.116203  356384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:42:31.230784  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:42:31.244090  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:42:31.258165  356384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:42:31.258227  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.268244  356384 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:42:31.268301  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.277016  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.285477  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.294193  356384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:42:31.302330  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.310881  356384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.323723  356384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:42:31.332397  356384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:42:31.339816  356384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:42:31.347092  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:31.431256  356384 ssh_runner.go:195] Run: sudo systemctl restart crio
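Note: the preceding sed/grep commands patch the CRI-O drop-in in place; their approximate net effect is sketched below. This is illustrative only: the real 02-crio.conf carries more keys, which is why the log edits it rather than overwriting it.

	cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio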
	I1018 09:42:31.550445  356384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:42:31.550516  356384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:42:31.554909  356384 start.go:563] Will wait 60s for crictl version
	I1018 09:42:31.554981  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.558477  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:42:31.583597  356384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:42:31.583672  356384 ssh_runner.go:195] Run: crio --version
	I1018 09:42:31.613008  356384 ssh_runner.go:195] Run: crio --version
	I1018 09:42:31.648752  356384 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:42:29.468712  352186 out.go:252]   - Booting up control plane ...
	I1018 09:42:29.468884  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:42:29.469005  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:42:29.469098  352186 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:42:29.482809  352186 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:42:29.484245  352186 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:42:29.484328  352186 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:42:29.579665  352186 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1018 09:42:33.243899  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:33.243975  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:31.650056  356384 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
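Note: the long --format template above packs name, driver, subnet, gateway, MTU, and container IPs into one JSON blob in a single `docker network inspect` call; any one field can be pulled the same way, e.g.:

	docker network inspect no-preload-589869 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'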
	I1018 09:42:31.668381  356384 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:42:31.672817  356384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
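Note: the bash one-liner above is minikube's idempotent /etc/hosts edit: drop any stale tab-separated entry for the name, append the fresh mapping, then install the result with a single sudo cp. Generalized (NAME and IP here match the log):

	NAME=host.minikube.internal
	IP=192.168.94.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$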
	I1018 09:42:31.683221  356384 kubeadm.go:883] updating cluster {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:42:31.683341  356384 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:42:31.683385  356384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:42:31.710541  356384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1018 09:42:31.710570  356384 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1018 09:42:31.710665  356384 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.710678  356384 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.710688  356384 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:31.710701  356384 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.710721  356384 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.710732  356384 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.710791  356384 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.710726  356384 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.712031  356384 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.712045  356384 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:31.712049  356384 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.712141  356384 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.712143  356384 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.712173  356384 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.712195  356384 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.712208  356384 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.846342  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.857672  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.859209  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.871427  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.887066  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.891007  356384 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1018 09:42:31.891060  356384 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.891108  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.891723  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.905118  356384 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1018 09:42:31.905188  356384 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.905245  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.911216  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1018 09:42:31.954062  356384 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1018 09:42:31.954110  356384 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.954162  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954203  356384 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1018 09:42:31.954161  356384 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1018 09:42:31.954236  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.954247  356384 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.954255  356384 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.954283  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954289  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954317  356384 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1018 09:42:31.954338  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.954353  356384 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.954374  356384 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1018 09:42:31.954387  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.954410  356384 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1018 09:42:31.954441  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:31.959772  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.960495  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:31.991385  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:31.991452  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:31.991457  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:31.991507  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:31.991556  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:31.991729  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:31.992404  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:32.033921  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:32.035600  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:32.035635  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1018 09:42:32.035705  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1018 09:42:32.035731  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:32.035792  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1018 09:42:32.036574  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1018 09:42:32.074955  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1018 09:42:32.080819  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1018 09:42:32.080955  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:32.081033  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1018 09:42:32.085481  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1018 09:42:32.085560  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1018 09:42:32.085565  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:32.085682  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1018 09:42:32.086030  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:32.091246  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1018 09:42:32.091350  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:32.111713  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1018 09:42:32.111815  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:32.111989  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1018 09:42:32.112015  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
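Note: each stat/scp pair in this block is a cache-miss check: stat the tarball's size and mtime on the node, and ship the cached image only when stat exits non-zero. The same logic as a plain conditional (image name illustrative, "node:" is a placeholder ssh target):

	IMG=kube-apiserver_v1.34.1
	if ! stat -c '%s %y' "/var/lib/minikube/images/$IMG" >/dev/null 2>&1; then
	  # miss: stream the cached tarball from the host into the node
	  scp "$HOME/.minikube/cache/images/amd64/registry.k8s.io/$IMG" \
	      node:"/var/lib/minikube/images/$IMG"
	fi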
	I1018 09:42:32.114381  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1018 09:42:32.114411  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1018 09:42:32.116597  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1018 09:42:32.116673  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.129743  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1018 09:42:32.129774  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1018 09:42:32.129802  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1018 09:42:32.129841  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1018 09:42:32.129865  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1018 09:42:32.129840  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1018 09:42:32.129897  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1018 09:42:32.130018  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:32.185416  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1018 09:42:32.185453  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1018 09:42:32.242458  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1018 09:42:32.242495  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1018 09:42:32.327527  356384 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.327626  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1018 09:42:32.811159  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1018 09:42:32.811207  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:32.811262  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1018 09:42:33.114320  356384 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:33.973525  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.162230124s)
	I1018 09:42:33.973560  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1018 09:42:33.973587  356384 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:33.973621  356384 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1018 09:42:33.973676  356384 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:33.973718  356384 ssh_runner.go:195] Run: which crictl
	I1018 09:42:33.973638  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1018 09:42:34.582012  352186 kubeadm.go:318] [apiclient] All control plane components are healthy after 5.002586 seconds
	I1018 09:42:34.582208  352186 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:42:34.596416  352186 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:42:35.119690  352186 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:42:35.120028  352186 kubeadm.go:318] [mark-control-plane] Marking the node old-k8s-version-619885 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:42:35.629531  352186 kubeadm.go:318] [bootstrap-token] Using token: 0j8grk.zmi3e1k9gtnd1hr8
	I1018 09:42:35.630798  352186 out.go:252]   - Configuring RBAC rules ...
	I1018 09:42:35.630944  352186 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:42:35.634722  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:42:35.641023  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:42:35.644910  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:42:35.647895  352186 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:42:35.650610  352186 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:42:35.661437  352186 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:42:35.854286  352186 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:42:36.038662  352186 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:42:36.039719  352186 kubeadm.go:318] 
	I1018 09:42:36.039811  352186 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:42:36.039844  352186 kubeadm.go:318] 
	I1018 09:42:36.039969  352186 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:42:36.039991  352186 kubeadm.go:318] 
	I1018 09:42:36.040038  352186 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:42:36.040120  352186 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:42:36.040193  352186 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:42:36.040201  352186 kubeadm.go:318] 
	I1018 09:42:36.040242  352186 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:42:36.040249  352186 kubeadm.go:318] 
	I1018 09:42:36.040290  352186 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:42:36.040319  352186 kubeadm.go:318] 
	I1018 09:42:36.040405  352186 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:42:36.040511  352186 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:42:36.040607  352186 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:42:36.040622  352186 kubeadm.go:318] 
	I1018 09:42:36.040744  352186 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:42:36.040894  352186 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:42:36.040905  352186 kubeadm.go:318] 
	I1018 09:42:36.041033  352186 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 0j8grk.zmi3e1k9gtnd1hr8 \
	I1018 09:42:36.041189  352186 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:42:36.041221  352186 kubeadm.go:318] 	--control-plane 
	I1018 09:42:36.041230  352186 kubeadm.go:318] 
	I1018 09:42:36.041355  352186 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:42:36.041362  352186 kubeadm.go:318] 
	I1018 09:42:36.041469  352186 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 0j8grk.zmi3e1k9gtnd1hr8 \
	I1018 09:42:36.041623  352186 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:42:36.044234  352186 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:42:36.044411  352186 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:42:36.044445  352186 cni.go:84] Creating CNI manager for ""
	I1018 09:42:36.044459  352186 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:36.046763  352186 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:42:36.048153  352186 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:42:36.052735  352186 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1018 09:42:36.052756  352186 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:42:36.067130  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:42:36.778567  352186 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:42:36.778641  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:36.778774  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-619885 minikube.k8s.io/updated_at=2025_10_18T09_42_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=old-k8s-version-619885 minikube.k8s.io/primary=true
	I1018 09:42:36.850013  352186 ops.go:34] apiserver oom_adj: -16
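Note: the -16 read above is the legacy /proc/<pid>/oom_adj view of the oom_score_adj the kubelet assigns to critical static pods (roughly -997 on the modern scale), which tells the kernel OOM-killer to prefer almost anything else over the apiserver. The same check by hand:

	cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj   # expect a strongly negative value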
	I1018 09:42:36.850273  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:37.351083  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.247728  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:38.247768  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:35.244924  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.271170545s)
	I1018 09:42:35.244957  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1018 09:42:35.244967  356384 ssh_runner.go:235] Completed: which crictl: (1.271232185s)
	I1018 09:42:35.244984  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:35.245022  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1018 09:42:35.245027  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:36.679295  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.43424504s)
	I1018 09:42:36.679325  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1018 09:42:36.679341  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.43427421s)
	I1018 09:42:36.679415  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:36.679352  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:36.679498  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1018 09:42:37.830377  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.15093625s)
	I1018 09:42:37.830452  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.150925916s)
	I1018 09:42:37.830465  356384 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:37.830481  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1018 09:42:37.830517  356384 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:37.830563  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1018 09:42:39.023076  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.192471894s)
	I1018 09:42:39.023119  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1018 09:42:39.023132  356384 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.192640454s)
	I1018 09:42:39.023153  356384 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:39.023177  356384 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1018 09:42:39.023216  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1018 09:42:39.023261  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:37.850995  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.350835  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:38.851071  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:39.350399  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:39.850710  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:40.351032  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:40.851075  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:41.350751  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:41.850466  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:42.350760  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.251927  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:43.252003  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:42.446063  356384 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.422825096s)
	I1018 09:42:42.446088  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1018 09:42:42.446158  356384 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (3.422878017s)
	I1018 09:42:42.446190  356384 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1018 09:42:42.446212  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1018 09:42:42.496029  356384 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:42.496082  356384 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1018 09:42:43.048016  356384 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1018 09:42:43.048058  356384 cache_images.go:124] Successfully loaded all cached images
	I1018 09:42:43.048063  356384 cache_images.go:93] duration metric: took 11.337478312s to LoadCachedImages
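Note: most of those 11.3s is the scp+load pipeline above: tarballs copy concurrently, but loads are serialized through a single "Loading image" slot. Because CRI-O and podman share the same containers/storage on the node, a `podman load` makes the image immediately visible to the runtime:

	sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	sudo crictl images | grep etcd   # the runtime now resolves registry.k8s.io/etcd:3.6.4-0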
	I1018 09:42:43.048076  356384 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 09:42:43.048172  356384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-589869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
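Note: the empty `ExecStart=` in the unit above is the standard systemd override trick: it clears the ExecStart inherited from the base kubelet.service before the minikube-specific command line is set. A minimal sketch of installing such a drop-in (the real drop-in scp'd below carries the full flag set shown in the log):

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet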
	I1018 09:42:43.048244  356384 ssh_runner.go:195] Run: crio config
	I1018 09:42:43.096290  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:43.096312  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:43.096331  356384 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:42:43.096353  356384 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-589869 NodeName:no-preload-589869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:42:43.096476  356384 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-589869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:42:43.096544  356384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:42:43.105087  356384 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1018 09:42:43.105193  356384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1018 09:42:43.113120  356384 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1018 09:42:43.113137  356384 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1018 09:42:43.113193  356384 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1018 09:42:43.113215  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1018 09:42:43.117293  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1018 09:42:43.117329  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1018 09:42:44.014669  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:42:44.028101  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1018 09:42:44.032143  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1018 09:42:44.032177  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1018 09:42:44.207087  356384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1018 09:42:44.211372  356384 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1018 09:42:44.211401  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
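Note: the `?checksum=file:...sha256` query on the download URLs above tells minikube's downloader to fetch the published SHA-256 next to each binary and verify it before install. A manual equivalent for one binary:

	V=v1.34.1
	curl -fsSLO "https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet"
	echo "$(curl -fsSL https://dl.k8s.io/release/$V/bin/linux/amd64/kubelet.sha256)  kubelet" \
	  | sha256sum --check   # prints "kubelet: OK" on a match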
	I1018 09:42:44.384068  356384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:42:44.392671  356384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:42:44.407534  356384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:42:44.425058  356384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
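Note: kubeadm.yaml.new is the rendered config dumped earlier in this log (InitConfiguration, ClusterConfiguration, and the kubelet/kube-proxy sections). It can be sanity-checked without mutating node state via kubeadm's dry-run mode:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run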
	I1018 09:42:44.438393  356384 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:42:44.442242  356384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:42:44.452223  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:44.531971  356384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:44.562070  356384 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869 for IP: 192.168.94.2
	I1018 09:42:44.562093  356384 certs.go:195] generating shared ca certs ...
	I1018 09:42:44.562115  356384 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.562270  356384 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:42:44.562313  356384 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:42:44.562324  356384 certs.go:257] generating profile certs ...
	I1018 09:42:44.562376  356384 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key
	I1018 09:42:44.562389  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt with IP's: []
	I1018 09:42:42.850651  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.350691  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:43.850624  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:44.351073  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:44.851041  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:45.351034  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:45.851220  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:46.350726  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:46.850974  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:47.350558  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.255594  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:48.255649  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:48.627598  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:55126->192.168.85.2:8443: read: connection reset by peer
	I1018 09:42:47.850967  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.350614  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:48.850374  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:49.351009  352186 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:49.425681  352186 kubeadm.go:1113] duration metric: took 12.647104717s to wait for elevateKubeSystemPrivileges
	I1018 09:42:49.425885  352186 kubeadm.go:402] duration metric: took 22.99214647s to StartCluster
	I1018 09:42:49.425916  352186 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:49.425979  352186 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:42:49.427564  352186 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:49.427925  352186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:42:49.428368  352186 config.go:182] Loaded profile config "old-k8s-version-619885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:42:49.428518  352186 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:42:49.428636  352186 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-619885"
	I1018 09:42:49.428660  352186 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-619885"
	I1018 09:42:49.428768  352186 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-619885"
	I1018 09:42:49.428666  352186 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-619885"
	I1018 09:42:49.428907  352186 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:42:49.429203  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.429436  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.428678  352186 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:42:49.433273  352186 out.go:179] * Verifying Kubernetes components...
	I1018 09:42:49.434601  352186 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:42:49.455161  352186 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:42:44.736618  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt ...
	I1018 09:42:44.736644  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: {Name:mk681b5eaf9c5bbd8adeb1d784233d192b938336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.736837  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key ...
	I1018 09:42:44.736857  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key: {Name:mk1c12e71185ce597c6dee95da15e4470786d675 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:44.736953  356384 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d
	I1018 09:42:44.736970  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:42:45.083161  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d ...
	I1018 09:42:45.083188  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d: {Name:mk4a75e600fa90a034a8972d87463f87cb5b98a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.083343  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d ...
	I1018 09:42:45.083356  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d: {Name:mk0e1847f7003315b8d6824ad9a722525cb3c942 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.083423  356384 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt.3d5af95d -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt
	I1018 09:42:45.083497  356384 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key
	I1018 09:42:45.083551  356384 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key
	I1018 09:42:45.083577  356384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt with IP's: []
	I1018 09:42:45.157195  356384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt ...
	I1018 09:42:45.157221  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt: {Name:mk59913af5d0eab5bb4250a6620440f15595ef7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:42:45.157379  356384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key ...
	I1018 09:42:45.157393  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key: {Name:mk6421ddcf8217af18599b98b316a3f4bbbea80a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
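	(The crypto.go lines above generate per-profile certificates whose IP SANs cover the service CIDR gateway, loopback, and the node IP. A self-contained Go sketch of producing a certificate with those SANs, using only the standard library; this is an illustrative self-signed stand-in, not minikube's actual signing code.)

	    package main

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "encoding/pem"
	        "math/big"
	        "net"
	        "os"
	        "time"
	    )

	    func main() {
	        key, err := rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            panic(err)
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{CommonName: "minikube"},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(24 * time.Hour),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            // The same IP SANs the log reports for the apiserver cert.
	            IPAddresses: []net.IP{
	                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
	                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
	            },
	        }
	        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	        if err != nil {
	            panic(err)
	        }
	        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	    }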
	I1018 09:42:45.157561  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:42:45.157603  356384 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:42:45.157613  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:42:45.157633  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:42:45.157660  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:42:45.157682  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:42:45.157723  356384 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:42:45.158380  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:42:45.177690  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:42:45.195208  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:42:45.212733  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:42:45.230282  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:42:45.247450  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:42:45.264949  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:42:45.282007  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:42:45.299203  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:42:45.317947  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:42:45.335528  356384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:42:45.352682  356384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:42:45.365495  356384 ssh_runner.go:195] Run: openssl version
	I1018 09:42:45.372334  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:42:45.382688  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.386878  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.386953  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:42:45.431550  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:42:45.440658  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:42:45.449777  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.453849  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.453918  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:42:45.488493  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:42:45.497428  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:42:45.506224  356384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.510594  356384 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.510650  356384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:42:45.546210  356384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
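	(The openssl/ln sequence above installs each CA under /etc/ssl/certs using its OpenSSL subject hash as the filename, e.g. b5213941.0, which is how OpenSSL-based clients locate trust anchors. A minimal Go sketch of that step under the assumption that openssl is on PATH; paths are examples taken from the log.)

	    package main

	    import (
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        cert := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	        if err != nil {
	            panic(err)
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        // Create the symlink only if it is missing, matching `test -L || ln -fs`.
	        if _, err := os.Lstat(link); os.IsNotExist(err) {
	            if err := os.Symlink(cert, link); err != nil {
	                panic(err)
	            }
	        }
	    }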
	I1018 09:42:45.555223  356384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:42:45.559094  356384 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:42:45.559147  356384 kubeadm.go:400] StartCluster: {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:42:45.559216  356384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:42:45.559256  356384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:42:45.587090  356384 cri.go:89] found id: ""
	I1018 09:42:45.587185  356384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:42:45.595435  356384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:42:45.603402  356384 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:42:45.603463  356384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:42:45.611282  356384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:42:45.611307  356384 kubeadm.go:157] found existing configuration files:
	
	I1018 09:42:45.611361  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:42:45.618930  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:42:45.618987  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:42:45.626002  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:42:45.633781  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:42:45.633850  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:42:45.641390  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:42:45.649135  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:42:45.649183  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:42:45.656552  356384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:42:45.664632  356384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:42:45.664710  356384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
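	(The grep/rm pairs above are the stale-config cleanup: any /etc/kubernetes/*.conf that does not reference the expected control-plane endpoint is removed before `kubeadm init`. A simplified Go sketch of the same loop; constants are taken from the log, and this is a re-statement of the behavior, not minikube's actual code.)

	    package main

	    import (
	        "os"
	        "path/filepath"
	        "strings"
	    )

	    func main() {
	        const endpoint = "https://control-plane.minikube.internal:8443"
	        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	        for _, name := range files {
	            path := filepath.Join("/etc/kubernetes", name)
	            data, err := os.ReadFile(path)
	            // Missing file or missing endpoint: remove it, like `sudo rm -f`.
	            if err != nil || !strings.Contains(string(data), endpoint) {
	                os.Remove(path) // errors ignored, matching rm -f semantics
	            }
	        }
	    }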
	I1018 09:42:45.672639  356384 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:42:45.725790  356384 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:42:45.781811  356384 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:42:49.455898  352186 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-619885"
	I1018 09:42:49.455992  352186 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:42:49.456406  352186 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:42:49.456422  352186 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:42:49.456433  352186 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:42:49.456475  352186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-619885
	I1018 09:42:49.488279  352186 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:42:49.488306  352186 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:42:49.488398  352186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-619885
	I1018 09:42:49.488951  352186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/old-k8s-version-619885/id_rsa Username:docker}
	I1018 09:42:49.515370  352186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33181 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/old-k8s-version-619885/id_rsa Username:docker}
	I1018 09:42:49.533547  352186 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
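	(The sed pipeline above splices a `hosts` stanza, mapping host.minikube.internal to the network gateway, into the CoreDNS Corefile just before its `forward` directive, then replaces the ConfigMap. A minimal Go sketch of that string edit, assuming a conventionally indented Corefile; it is a simplified re-implementation of the sed expression, not minikube's code.)

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // injectHostRecord inserts a hosts block before the forward directive,
	    // mirroring the `sed -e '/^        forward .../i ...'` in the log.
	    func injectHostRecord(corefile, gatewayIP string) string {
	        hosts := "        hosts {\n" +
	            "           " + gatewayIP + " host.minikube.internal\n" +
	            "           fallthrough\n" +
	            "        }\n"
	        i := strings.Index(corefile, "        forward .")
	        if i < 0 {
	            return corefile // no forward directive found; leave untouched
	        }
	        return corefile[:i] + hosts + corefile[i:]
	    }

	    func main() {
	        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	        fmt.Print(injectHostRecord(corefile, "192.168.76.1"))
	    }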
	I1018 09:42:49.600084  352186 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:42:49.616161  352186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:42:49.642271  352186 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:42:49.826110  352186 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1018 09:42:49.827420  352186 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-619885" to be "Ready" ...
	I1018 09:42:50.040211  352186 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:42:50.041455  352186 addons.go:514] duration metric: took 612.932296ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:42:50.330597  352186 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-619885" context rescaled to 1 replicas
	W1018 09:42:51.832069  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:48.740146  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:48.740519  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:49.239989  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:49.240446  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:42:49.740062  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:56.306510  356384 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:42:56.306592  356384 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:42:56.306730  356384 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:42:56.306819  356384 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:42:56.306884  356384 kubeadm.go:318] OS: Linux
	I1018 09:42:56.306927  356384 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:42:56.306968  356384 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:42:56.307009  356384 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:42:56.307066  356384 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:42:56.307146  356384 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:42:56.307234  356384 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:42:56.307293  356384 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:42:56.307333  356384 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:42:56.307398  356384 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:42:56.307518  356384 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:42:56.307653  356384 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:42:56.307739  356384 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:42:56.309095  356384 out.go:252]   - Generating certificates and keys ...
	I1018 09:42:56.309163  356384 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:42:56.309229  356384 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:42:56.309287  356384 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:42:56.309345  356384 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:42:56.309396  356384 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:42:56.309444  356384 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:42:56.309494  356384 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:42:56.309600  356384 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-589869] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:42:56.309698  356384 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:42:56.309884  356384 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-589869] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:42:56.309950  356384 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:42:56.310016  356384 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:42:56.310055  356384 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:42:56.310106  356384 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:42:56.310184  356384 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:42:56.310282  356384 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:42:56.310367  356384 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:42:56.310434  356384 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:42:56.310513  356384 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:42:56.310601  356384 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:42:56.310660  356384 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:42:56.312430  356384 out.go:252]   - Booting up control plane ...
	I1018 09:42:56.312510  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:42:56.312583  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:42:56.312663  356384 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:42:56.312769  356384 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:42:56.312872  356384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:42:56.312966  356384 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:42:56.313042  356384 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:42:56.313076  356384 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:42:56.313191  356384 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:42:56.313279  356384 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:42:56.313333  356384 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001828299s
	I1018 09:42:56.313410  356384 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:42:56.313492  356384 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1018 09:42:56.313579  356384 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:42:56.313660  356384 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:42:56.313719  356384 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.340309574s
	I1018 09:42:56.313818  356384 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.004800013s
	I1018 09:42:56.313930  356384 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001688732s
	I1018 09:42:56.314067  356384 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:42:56.314217  356384 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:42:56.314304  356384 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:42:56.314505  356384 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-589869 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:42:56.314566  356384 kubeadm.go:318] [bootstrap-token] Using token: atql1s.56kw74yf44dlyzs8
	I1018 09:42:56.316346  356384 out.go:252]   - Configuring RBAC rules ...
	I1018 09:42:56.316461  356384 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:42:56.316537  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:42:56.316705  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:42:56.316840  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:42:56.316975  356384 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:42:56.317102  356384 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:42:56.317215  356384 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:42:56.317259  356384 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:42:56.317299  356384 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:42:56.317305  356384 kubeadm.go:318] 
	I1018 09:42:56.317354  356384 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:42:56.317363  356384 kubeadm.go:318] 
	I1018 09:42:56.317442  356384 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:42:56.317452  356384 kubeadm.go:318] 
	I1018 09:42:56.317480  356384 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:42:56.317543  356384 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:42:56.317600  356384 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:42:56.317609  356384 kubeadm.go:318] 
	I1018 09:42:56.317654  356384 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:42:56.317659  356384 kubeadm.go:318] 
	I1018 09:42:56.317698  356384 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:42:56.317704  356384 kubeadm.go:318] 
	I1018 09:42:56.317746  356384 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:42:56.317857  356384 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:42:56.317918  356384 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:42:56.317924  356384 kubeadm.go:318] 
	I1018 09:42:56.317997  356384 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:42:56.318105  356384 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:42:56.318119  356384 kubeadm.go:318] 
	I1018 09:42:56.318222  356384 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token atql1s.56kw74yf44dlyzs8 \
	I1018 09:42:56.318338  356384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:42:56.318378  356384 kubeadm.go:318] 	--control-plane 
	I1018 09:42:56.318389  356384 kubeadm.go:318] 
	I1018 09:42:56.318465  356384 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:42:56.318472  356384 kubeadm.go:318] 
	I1018 09:42:56.318559  356384 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token atql1s.56kw74yf44dlyzs8 \
	I1018 09:42:56.318655  356384 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:42:56.318683  356384 cni.go:84] Creating CNI manager for ""
	I1018 09:42:56.318692  356384 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:42:56.319950  356384 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1018 09:42:54.330847  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	W1018 09:42:56.830703  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:54.741268  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:54.741306  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:42:56.321094  356384 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:42:56.325446  356384 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:42:56.325460  356384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:42:56.339214  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:42:56.546033  356384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:42:56.546104  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:56.546171  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-589869 minikube.k8s.io/updated_at=2025_10_18T09_42_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=no-preload-589869 minikube.k8s.io/primary=true
	I1018 09:42:56.625893  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:56.625893  356384 ops.go:34] apiserver oom_adj: -16
	I1018 09:42:57.126939  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:57.626558  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:58.126718  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:58.625925  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:59.126379  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:42:59.625969  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:00.126809  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:00.626644  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:01.126006  356384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:43:01.197635  356384 kubeadm.go:1113] duration metric: took 4.651575458s to wait for elevateKubeSystemPrivileges
	I1018 09:43:01.197671  356384 kubeadm.go:402] duration metric: took 15.638525769s to StartCluster
	I1018 09:43:01.197696  356384 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:01.197794  356384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:01.199265  356384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:01.199493  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:43:01.199500  356384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:43:01.199556  356384 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:43:01.199670  356384 addons.go:69] Setting storage-provisioner=true in profile "no-preload-589869"
	I1018 09:43:01.199678  356384 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:01.199688  356384 addons.go:69] Setting default-storageclass=true in profile "no-preload-589869"
	I1018 09:43:01.199713  356384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-589869"
	I1018 09:43:01.199692  356384 addons.go:238] Setting addon storage-provisioner=true in "no-preload-589869"
	I1018 09:43:01.199752  356384 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:01.200158  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.200328  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.204345  356384 out.go:179] * Verifying Kubernetes components...
	I1018 09:43:01.209303  356384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:01.221767  356384 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:43:01.222154  356384 addons.go:238] Setting addon default-storageclass=true in "no-preload-589869"
	I1018 09:43:01.222198  356384 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:01.222744  356384 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:01.223004  356384 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:01.223022  356384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:43:01.223106  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:01.244587  356384 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:01.244624  356384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:43:01.244685  356384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:01.250000  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:01.273407  356384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33186 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:01.293153  356384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:43:01.354955  356384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:01.368965  356384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:01.392388  356384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:01.477719  356384 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1018 09:43:01.478900  356384 node_ready.go:35] waiting up to 6m0s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:01.668511  356384 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1018 09:42:58.831596  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	W1018 09:43:01.331652  352186 node_ready.go:57] node "old-k8s-version-619885" has "Ready":"False" status (will retry)
	I1018 09:42:59.742949  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:42:59.742995  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:01.669701  356384 addons.go:514] duration metric: took 470.141667ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:43:01.981903  356384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-589869" context rescaled to 1 replicas
	W1018 09:43:03.482553  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:02.832950  352186 node_ready.go:49] node "old-k8s-version-619885" is "Ready"
	I1018 09:43:02.832991  352186 node_ready.go:38] duration metric: took 13.005539257s for node "old-k8s-version-619885" to be "Ready" ...
	I1018 09:43:02.833013  352186 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:02.833079  352186 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:02.850541  352186 api_server.go:72] duration metric: took 13.420992388s to wait for apiserver process to appear ...
	I1018 09:43:02.850572  352186 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:02.850598  352186 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:43:02.857555  352186 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:43:02.859060  352186 api_server.go:141] control plane version: v1.28.0
	I1018 09:43:02.859092  352186 api_server.go:131] duration metric: took 8.512144ms to wait for apiserver health ...
	I1018 09:43:02.859104  352186 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:02.863457  352186 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:02.863494  352186 system_pods.go:61] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:02.863504  352186 system_pods.go:61] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:02.863515  352186 system_pods.go:61] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:02.863523  352186 system_pods.go:61] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:02.863530  352186 system_pods.go:61] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:02.863540  352186 system_pods.go:61] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:02.863547  352186 system_pods.go:61] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:02.863555  352186 system_pods.go:61] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending
	I1018 09:43:02.863564  352186 system_pods.go:74] duration metric: took 4.452537ms to wait for pod list to return data ...
	I1018 09:43:02.863578  352186 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:02.866277  352186 default_sa.go:45] found service account: "default"
	I1018 09:43:02.866301  352186 default_sa.go:55] duration metric: took 2.715282ms for default service account to be created ...
	I1018 09:43:02.866313  352186 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:02.870155  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:02.870191  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:02.870202  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:02.870209  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:02.870215  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:02.870221  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:02.870227  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:02.870240  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:02.870248  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:02.870274  352186 retry.go:31] will retry after 293.232434ms: missing components: kube-dns
	I1018 09:43:03.169427  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.169471  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.169482  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.169490  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.169496  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.169501  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.169506  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.169511  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.169520  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.169540  352186 retry.go:31] will retry after 294.260183ms: missing components: kube-dns
	I1018 09:43:03.468244  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.468273  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.468279  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.468286  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.468290  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.468293  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.468297  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.468300  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.468304  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.468318  352186 retry.go:31] will retry after 321.22082ms: missing components: kube-dns
	I1018 09:43:03.793422  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:03.793454  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:03.793460  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:03.793465  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:03.793469  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:03.793475  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:03.793480  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:03.793485  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:03.793491  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:03.793511  352186 retry.go:31] will retry after 513.544946ms: missing components: kube-dns
	I1018 09:43:04.311386  352186 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:04.311413  352186 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running
	I1018 09:43:04.311418  352186 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running
	I1018 09:43:04.311422  352186 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:04.311425  352186 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running
	I1018 09:43:04.311429  352186 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running
	I1018 09:43:04.311432  352186 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:04.311435  352186 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running
	I1018 09:43:04.311438  352186 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:04.311446  352186 system_pods.go:126] duration metric: took 1.445126187s to wait for k8s-apps to be running ...
	I1018 09:43:04.311453  352186 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:04.311496  352186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:04.324451  352186 system_svc.go:56] duration metric: took 12.985333ms WaitForService to wait for kubelet
	I1018 09:43:04.324478  352186 kubeadm.go:586] duration metric: took 14.894943514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:04.324494  352186 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:04.327090  352186 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:04.327112  352186 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:04.327128  352186 node_conditions.go:105] duration metric: took 2.629403ms to run NodePressure ...
	I1018 09:43:04.327140  352186 start.go:241] waiting for startup goroutines ...
	I1018 09:43:04.327147  352186 start.go:246] waiting for cluster config update ...
	I1018 09:43:04.327156  352186 start.go:255] writing updated cluster config ...
	I1018 09:43:04.327401  352186 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:04.331219  352186 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:04.335281  352186 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.339341  352186 pod_ready.go:94] pod "coredns-5dd5756b68-wklp4" is "Ready"
	I1018 09:43:04.339360  352186 pod_ready.go:86] duration metric: took 4.058957ms for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.342007  352186 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.346571  352186 pod_ready.go:94] pod "etcd-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.346599  352186 pod_ready.go:86] duration metric: took 4.567876ms for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.349243  352186 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.353054  352186 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.353078  352186 pod_ready.go:86] duration metric: took 3.814596ms for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.355578  352186 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.736236  352186 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-619885" is "Ready"
	I1018 09:43:04.736267  352186 pod_ready.go:86] duration metric: took 380.668197ms for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:04.936030  352186 pod_ready.go:83] waiting for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.334891  352186 pod_ready.go:94] pod "kube-proxy-spkr8" is "Ready"
	I1018 09:43:05.334917  352186 pod_ready.go:86] duration metric: took 398.862319ms for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.535379  352186 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.935256  352186 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-619885" is "Ready"
	I1018 09:43:05.935281  352186 pod_ready.go:86] duration metric: took 399.880096ms for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:05.935292  352186 pod_ready.go:40] duration metric: took 1.604042189s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:05.985690  352186 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:43:05.987568  352186 out.go:203] 
	W1018 09:43:05.988657  352186 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:43:05.989705  352186 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:43:05.991209  352186 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-619885" cluster and "default" namespace by default
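	[editor's note] The pod_ready waits above poll each control-plane pod until its Ready condition reports True. As a rough illustration only (not minikube's actual helper), a client-go check of one pod's Ready condition could look like the sketch below; the kubeconfig path and pod name are placeholders taken from this run.

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Placeholder kubeconfig path; minikube maintains its own profile config.
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil { panic(err) }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil { panic(err) }

	        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
	            "coredns-5dd5756b68-wklp4", metav1.GetOptions{})
	        if err != nil { panic(err) }
	        for _, c := range pod.Status.Conditions {
	            if c.Type == corev1.PodReady {
	                // Prints "True" once the pod is Ready, matching pod_ready.go:94 above.
	                fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
	            }
	        }
	    }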
	I1018 09:43:04.743175  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1018 09:43:04.743209  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1018 09:43:05.982397  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	W1018 09:43:08.482571  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:09.717903  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:40168->192.168.85.2:8443: read: connection reset by peer
	I1018 09:43:09.717956  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:09.718334  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:09.739601  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:09.739996  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:10.239573  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:10.240006  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:10.739645  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:10.740120  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:11.239870  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:11.240230  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:11.739885  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:11.740288  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:12.240017  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:12.240380  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:12.740068  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:12.740479  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:13.239969  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:13.240354  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	W1018 09:43:10.982419  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	W1018 09:43:13.482267  356384 node_ready.go:57] node "no-preload-589869" has "Ready":"False" status (will retry)
	I1018 09:43:14.482638  356384 node_ready.go:49] node "no-preload-589869" is "Ready"
	I1018 09:43:14.482668  356384 node_ready.go:38] duration metric: took 13.003733019s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:14.482686  356384 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:14.482753  356384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:14.498725  356384 api_server.go:72] duration metric: took 13.299189053s to wait for apiserver process to appear ...
	I1018 09:43:14.498760  356384 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:14.498798  356384 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:14.505089  356384 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:43:14.506197  356384 api_server.go:141] control plane version: v1.34.1
	I1018 09:43:14.506226  356384 api_server.go:131] duration metric: took 7.458167ms to wait for apiserver health ...
	I1018 09:43:14.506237  356384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:14.510161  356384 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:14.510200  356384 system_pods.go:61] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:14.510209  356384 system_pods.go:61] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running
	I1018 09:43:14.510219  356384 system_pods.go:61] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:14.510225  356384 system_pods.go:61] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running
	I1018 09:43:14.510231  356384 system_pods.go:61] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running
	I1018 09:43:14.510241  356384 system_pods.go:61] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:14.510251  356384 system_pods.go:61] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running
	I1018 09:43:14.510258  356384 system_pods.go:61] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:14.510270  356384 system_pods.go:74] duration metric: took 4.017075ms to wait for pod list to return data ...
	I1018 09:43:14.510284  356384 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:14.513176  356384 default_sa.go:45] found service account: "default"
	I1018 09:43:14.513218  356384 default_sa.go:55] duration metric: took 2.926748ms for default service account to be created ...
	I1018 09:43:14.513228  356384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:14.605435  356384 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:14.605484  356384 system_pods.go:89] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:14.605496  356384 system_pods.go:89] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running
	I1018 09:43:14.605505  356384 system_pods.go:89] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:14.605511  356384 system_pods.go:89] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running
	I1018 09:43:14.605524  356384 system_pods.go:89] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running
	I1018 09:43:14.605528  356384 system_pods.go:89] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:14.605534  356384 system_pods.go:89] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running
	I1018 09:43:14.605543  356384 system_pods.go:89] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:43:14.605554  356384 system_pods.go:126] duration metric: took 92.319884ms to wait for k8s-apps to be running ...
	I1018 09:43:14.605570  356384 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:14.605622  356384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:14.623319  356384 system_svc.go:56] duration metric: took 17.73688ms WaitForService to wait for kubelet
	I1018 09:43:14.623353  356384 kubeadm.go:586] duration metric: took 13.423827058s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:14.623377  356384 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:14.625575  356384 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:14.625597  356384 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:14.625610  356384 node_conditions.go:105] duration metric: took 2.227978ms to run NodePressure ...
	I1018 09:43:14.625623  356384 start.go:241] waiting for startup goroutines ...
	I1018 09:43:14.625633  356384 start.go:246] waiting for cluster config update ...
	I1018 09:43:14.625648  356384 start.go:255] writing updated cluster config ...
	I1018 09:43:14.625963  356384 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:14.630861  356384 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:14.634331  356384 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pck54" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.638089  356384 pod_ready.go:94] pod "coredns-66bc5c9577-pck54" is "Ready"
	I1018 09:43:14.638108  356384 pod_ready.go:86] duration metric: took 3.744539ms for pod "coredns-66bc5c9577-pck54" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.640196  356384 pod_ready.go:83] waiting for pod "etcd-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.644093  356384 pod_ready.go:94] pod "etcd-no-preload-589869" is "Ready"
	I1018 09:43:14.644115  356384 pod_ready.go:86] duration metric: took 3.89609ms for pod "etcd-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.645962  356384 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.649652  356384 pod_ready.go:94] pod "kube-apiserver-no-preload-589869" is "Ready"
	I1018 09:43:14.649674  356384 pod_ready.go:86] duration metric: took 3.686973ms for pod "kube-apiserver-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:14.651392  356384 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:15.035373  356384 pod_ready.go:94] pod "kube-controller-manager-no-preload-589869" is "Ready"
	I1018 09:43:15.035397  356384 pod_ready.go:86] duration metric: took 383.987388ms for pod "kube-controller-manager-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:15.235706  356384 pod_ready.go:83] waiting for pod "kube-proxy-45kpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:15.634796  356384 pod_ready.go:94] pod "kube-proxy-45kpn" is "Ready"
	I1018 09:43:15.634842  356384 pod_ready.go:86] duration metric: took 399.105333ms for pod "kube-proxy-45kpn" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:15.835703  356384 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:16.234958  356384 pod_ready.go:94] pod "kube-scheduler-no-preload-589869" is "Ready"
	I1018 09:43:16.234990  356384 pod_ready.go:86] duration metric: took 399.258749ms for pod "kube-scheduler-no-preload-589869" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:43:16.235012  356384 pod_ready.go:40] duration metric: took 1.604113026s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:16.283710  356384 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:43:16.285503  356384 out.go:179] * Done! kubectl is now configured to use "no-preload-589869" cluster and "default" namespace by default
	I1018 09:43:13.740090  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:13.740444  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:14.239959  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:14.240330  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:14.739981  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:14.740372  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:15.240031  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:15.240361  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:15.739973  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:15.740325  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:16.239979  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:16.240331  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:16.739966  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:16.740364  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:17.240008  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:17.240620  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:17.739247  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:17.739633  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:18.239993  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:18.240381  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:18.739244  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:18.739806  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:19.239372  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:19.239777  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:19.739427  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:19.739873  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:20.239500  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:20.239917  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:20.739224  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:20.739664  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:21.239970  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:21.240340  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:21.739979  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:21.740380  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:22.239961  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:22.240338  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:22.740020  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:22.740392  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:23.240085  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:23.240502  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
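	[editor's note] The 353123 run above retries /healthz roughly every 500ms until the apiserver answers. A minimal standalone poller in the same spirit is sketched below; the endpoint and retry budget are assumptions from this log, and the self-signed apiserver certificate is skipped here only for illustration.

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        // Illustrative only: a real client should verify the cluster CA
	        // rather than set InsecureSkipVerify.
	        client := &http.Client{
	            Timeout: 2 * time.Second,
	            Transport: &http.Transport{
	                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	            },
	        }
	        for i := 0; i < 60; i++ {
	            resp, err := client.Get("https://192.168.85.2:8443/healthz")
	            if err == nil {
	                resp.Body.Close()
	                if resp.StatusCode == http.StatusOK {
	                    fmt.Println("apiserver healthy")
	                    return
	                }
	            }
	            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	        }
	        fmt.Println("gave up waiting for apiserver")
	    }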
	
	
	==> CRI-O <==
	Oct 18 09:43:14 no-preload-589869 crio[766]: time="2025-10-18T09:43:14.518414278Z" level=info msg="Starting container: 825da7b10cbea0ce4413b6f6860144ee9cf21ebd99841124ddae983c9d80dcb5" id=c1beeaa5-5ead-4d7b-9abf-e5eebf21b3b7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:43:14 no-preload-589869 crio[766]: time="2025-10-18T09:43:14.520639051Z" level=info msg="Started container" PID=2910 containerID=825da7b10cbea0ce4413b6f6860144ee9cf21ebd99841124ddae983c9d80dcb5 description=kube-system/coredns-66bc5c9577-pck54/coredns id=c1beeaa5-5ead-4d7b-9abf-e5eebf21b3b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b0a4c7058e3771b09de62111a89b4c775591a3c2f5117ffca9023019028f2659
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.754137088Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3a5ece4a-9cee-4b11-b20f-bce9227441ca name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.754274473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.760343069Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3d21ec584311a7249c8ba6441640b406ef14ebf8a7fc9a0b4c091b9060ce282 UID:51be3b0e-97f1-4abd-863d-5069b9e73230 NetNS:/var/run/netns/c886e02f-e172-40cf-bfc3-32b7fe8db340 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000dc4388}] Aliases:map[]}"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.760375404Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.770641773Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3d21ec584311a7249c8ba6441640b406ef14ebf8a7fc9a0b4c091b9060ce282 UID:51be3b0e-97f1-4abd-863d-5069b9e73230 NetNS:/var/run/netns/c886e02f-e172-40cf-bfc3-32b7fe8db340 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000dc4388}] Aliases:map[]}"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.770799933Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.771640701Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.772623359Z" level=info msg="Ran pod sandbox f3d21ec584311a7249c8ba6441640b406ef14ebf8a7fc9a0b4c091b9060ce282 with infra container: default/busybox/POD" id=3a5ece4a-9cee-4b11-b20f-bce9227441ca name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.774914822Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93bca45c-0153-48de-a5f8-b1d172a2ae04 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.775082061Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=93bca45c-0153-48de-a5f8-b1d172a2ae04 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.775641895Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=93bca45c-0153-48de-a5f8-b1d172a2ae04 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.776293762Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2e291ca4-7122-46f6-9d56-6902562026bd name=/runtime.v1.ImageService/PullImage
	Oct 18 09:43:16 no-preload-589869 crio[766]: time="2025-10-18T09:43:16.777885746Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.833677052Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=2e291ca4-7122-46f6-9d56-6902562026bd name=/runtime.v1.ImageService/PullImage
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.834331592Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ee7e795e-5f06-4a61-9a2d-0201b4e27eaf name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.835870387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34b9523b-b78d-40d0-93b6-4bd5376c45f4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.839219226Z" level=info msg="Creating container: default/busybox/busybox" id=0d4c6145-820d-45cf-9462-2af74fedbe9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.840072461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.843536657Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.844087174Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.868802073Z" level=info msg="Created container d54b754710b6347c2e33f092ff7577b720f504e68887dac54ae4e055a5617b34: default/busybox/busybox" id=0d4c6145-820d-45cf-9462-2af74fedbe9c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.869599607Z" level=info msg="Starting container: d54b754710b6347c2e33f092ff7577b720f504e68887dac54ae4e055a5617b34" id=2f285a38-a9e9-4dc3-bd91-01c1419c5029 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:43:18 no-preload-589869 crio[766]: time="2025-10-18T09:43:18.871754837Z" level=info msg="Started container" PID=2983 containerID=d54b754710b6347c2e33f092ff7577b720f504e68887dac54ae4e055a5617b34 description=default/busybox/busybox id=2f285a38-a9e9-4dc3-bd91-01c1419c5029 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3d21ec584311a7249c8ba6441640b406ef14ebf8a7fc9a0b4c091b9060ce282
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d54b754710b63       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f3d21ec584311       busybox                                     default
	825da7b10cbea       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   b0a4c7058e377       coredns-66bc5c9577-pck54                    kube-system
	223932e24247f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   fc271ef30ea42       storage-provisioner                         kube-system
	b803b974d8dfb       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   877e3ae42ce45       kindnet-zjqmf                               kube-system
	1679c90ffcd7b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   fecaf15068c6b       kube-proxy-45kpn                            kube-system
	42b5089dd9e09       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   094fb47ce7fa4       kube-scheduler-no-preload-589869            kube-system
	cc09d2acbae30       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   8a3b7bcff187e       etcd-no-preload-589869                      kube-system
	fa7afada4f52d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   351db089993ec       kube-controller-manager-no-preload-589869   kube-system
	01bcd5cf55a98       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   b75b1e1a21977       kube-apiserver-no-preload-589869            kube-system
	
	
	==> coredns [825da7b10cbea0ce4413b6f6860144ee9cf21ebd99841124ddae983c9d80dcb5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40014 - 57269 "HINFO IN 8911970711935637267.4068078301471059263. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072248339s
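	[editor's note] CoreDNS above serves the cluster DNS service, whose ClusterIP (10.96.0.10) is allocated in the kube-apiserver log further down. A hedged sketch of resolving a service name directly against that resolver, reachable only from inside the cluster network:

	    package main

	    import (
	        "context"
	        "fmt"
	        "net"
	        "time"
	    )

	    func main() {
	        r := &net.Resolver{
	            PreferGo: true,
	            Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
	                d := net.Dialer{Timeout: 2 * time.Second}
	                // 10.96.0.10 is the kube-dns ClusterIP allocated in this run.
	                return d.DialContext(ctx, "udp", "10.96.0.10:53")
	            },
	        }
	        addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	        if err != nil {
	            fmt.Println("lookup failed:", err)
	            return
	        }
	        fmt.Println("resolved:", addrs) // expect the apiserver ClusterIP, 10.96.0.1
	    }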
	
	
	==> describe nodes <==
	Name:               no-preload-589869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-589869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:43:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:43:26 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:43:26 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:43:26 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:43:26 +0000   Sat, 18 Oct 2025 09:43:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-589869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6a71982a-ecb5-4a3a-b089-e736cb5f928f
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-pck54                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-589869                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-zjqmf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-589869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-no-preload-589869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-45kpn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-589869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-589869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-589869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-589869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node no-preload-589869 event: Registered Node no-preload-589869 in Controller
	  Normal  NodeReady                12s   kubelet          Node no-preload-589869 status is now: NodeReady
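	[editor's note] The node_conditions.go steps in the runs above read exactly the fields shown here: the Ready/pressure conditions plus cpu and ephemeral-storage capacity. A client-go sketch that reproduces that check follows; the kubeconfig path is a placeholder and the node name is taken from this run.

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil { panic(err) }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil { panic(err) }

	        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-589869", metav1.GetOptions{})
	        if err != nil { panic(err) }
	        // Mirrors the Conditions table above (MemoryPressure, DiskPressure, PIDPressure, Ready).
	        for _, c := range node.Status.Conditions {
	            fmt.Printf("%-16s %s\n", c.Type, c.Status)
	        }
	        // Capacity is a map of resource name to quantity, as in the Capacity block above.
	        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	    }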
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [cc09d2acbae309fd0b838bf4366ab3e9573fc7da266b235b8d46df963ac39266] <==
	{"level":"warn","ts":"2025-10-18T09:42:53.465015Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.009355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-589869\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-10-18T09:42:53.465031Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.805701ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:350"}
	{"level":"info","ts":"2025-10-18T09:42:53.465049Z","caller":"traceutil/trace.go:172","msg":"trace[1427399085] transaction","detail":"{read_only:false; response_revision:12; number_of_response:1; }","duration":"178.01622ms","start":"2025-10-18T09:42:53.287019Z","end":"2025-10-18T09:42:53.465035Z","steps":["trace[1427399085] 'process raft request'  (duration: 177.938989ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.465074Z","caller":"traceutil/trace.go:172","msg":"trace[254221480] range","detail":"{range_begin:/registry/minions/no-preload-589869; range_end:; response_count:0; response_revision:11; }","duration":"184.081256ms","start":"2025-10-18T09:42:53.280982Z","end":"2025-10-18T09:42:53.465063Z","steps":["trace[254221480] 'agreement among raft nodes before linearized reading'  (duration: 183.978853ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.465083Z","caller":"traceutil/trace.go:172","msg":"trace[1425539784] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:11; }","duration":"138.887866ms","start":"2025-10-18T09:42:53.326184Z","end":"2025-10-18T09:42:53.465072Z","steps":["trace[1425539784] 'agreement among raft nodes before linearized reading'  (duration: 138.684711ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.626241Z","caller":"traceutil/trace.go:172","msg":"trace[1110366131] linearizableReadLoop","detail":"{readStateIndex:15; appliedIndex:15; }","duration":"161.379204ms","start":"2025-10-18T09:42:53.464844Z","end":"2025-10-18T09:42:53.626223Z","steps":["trace[1110366131] 'read index received'  (duration: 161.373535ms)","trace[1110366131] 'applied index is now lower than readState.Index'  (duration: 4.636µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:42:53.657480Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"228.346623ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"warn","ts":"2025-10-18T09:42:53.657508Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.255194ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T09:42:53.657541Z","caller":"traceutil/trace.go:172","msg":"trace[920448423] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:12; }","duration":"228.415148ms","start":"2025-10-18T09:42:53.429110Z","end":"2025-10-18T09:42:53.657526Z","steps":["trace[920448423] 'agreement among raft nodes before linearized reading'  (duration: 197.205742ms)","trace[920448423] 'range keys from in-memory index tree'  (duration: 31.11762ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:42:53.657551Z","caller":"traceutil/trace.go:172","msg":"trace[1569724849] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:12; }","duration":"227.307229ms","start":"2025-10-18T09:42:53.430232Z","end":"2025-10-18T09:42:53.657539Z","steps":["trace[1569724849] 'agreement among raft nodes before linearized reading'  (duration: 196.070905ms)","trace[1569724849] 'range keys from in-memory index tree'  (duration: 31.15965ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:42:53.657631Z","caller":"traceutil/trace.go:172","msg":"trace[1807571259] transaction","detail":"{read_only:false; number_of_response:0; response_revision:12; }","duration":"228.862468ms","start":"2025-10-18T09:42:53.428745Z","end":"2025-10-18T09:42:53.657608Z","steps":["trace[1807571259] 'process raft request'  (duration: 197.564823ms)","trace[1807571259] 'compare'  (duration: 31.153412ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:42:53.657685Z","caller":"traceutil/trace.go:172","msg":"trace[1558630258] transaction","detail":"{read_only:false; response_revision:13; number_of_response:1; }","duration":"228.55888ms","start":"2025-10-18T09:42:53.429113Z","end":"2025-10-18T09:42:53.657672Z","steps":["trace[1558630258] 'process raft request'  (duration: 228.416584ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657772Z","caller":"traceutil/trace.go:172","msg":"trace[2054838409] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"228.486992ms","start":"2025-10-18T09:42:53.429275Z","end":"2025-10-18T09:42:53.657762Z","steps":["trace[2054838409] 'process raft request'  (duration: 228.394122ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657814Z","caller":"traceutil/trace.go:172","msg":"trace[432357878] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"228.507288ms","start":"2025-10-18T09:42:53.429299Z","end":"2025-10-18T09:42:53.657806Z","steps":["trace[432357878] 'process raft request'  (duration: 228.416842ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657811Z","caller":"traceutil/trace.go:172","msg":"trace[1906778936] transaction","detail":"{read_only:false; response_revision:20; number_of_response:1; }","duration":"226.52248ms","start":"2025-10-18T09:42:53.431278Z","end":"2025-10-18T09:42:53.657800Z","steps":["trace[1906778936] 'process raft request'  (duration: 226.483383ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657867Z","caller":"traceutil/trace.go:172","msg":"trace[2057981514] transaction","detail":"{read_only:false; response_revision:14; number_of_response:1; }","duration":"228.745581ms","start":"2025-10-18T09:42:53.429111Z","end":"2025-10-18T09:42:53.657857Z","steps":["trace[2057981514] 'process raft request'  (duration: 228.491388ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657892Z","caller":"traceutil/trace.go:172","msg":"trace[1774683944] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"228.602027ms","start":"2025-10-18T09:42:53.429284Z","end":"2025-10-18T09:42:53.657886Z","steps":["trace[1774683944] 'process raft request'  (duration: 228.409538ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657849Z","caller":"traceutil/trace.go:172","msg":"trace[197353362] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"228.568511ms","start":"2025-10-18T09:42:53.429250Z","end":"2025-10-18T09:42:53.657818Z","steps":["trace[197353362] 'process raft request'  (duration: 228.391882ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.657941Z","caller":"traceutil/trace.go:172","msg":"trace[994806679] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"228.821266ms","start":"2025-10-18T09:42:53.429113Z","end":"2025-10-18T09:42:53.657935Z","steps":["trace[994806679] 'process raft request'  (duration: 228.62089ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:42:53.676907Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.470655ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T09:42:53.676969Z","caller":"traceutil/trace.go:172","msg":"trace[1690948604] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:20; }","duration":"110.527343ms","start":"2025-10-18T09:42:53.566419Z","end":"2025-10-18T09:42:53.676946Z","steps":["trace[1690948604] 'agreement among raft nodes before linearized reading'  (duration: 110.442742ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.677008Z","caller":"traceutil/trace.go:172","msg":"trace[544622602] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"205.80648ms","start":"2025-10-18T09:42:53.471192Z","end":"2025-10-18T09:42:53.676998Z","steps":["trace[544622602] 'process raft request'  (duration: 205.761627ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-18T09:42:53.676907Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.785574ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-589869\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-18T09:42:53.677034Z","caller":"traceutil/trace.go:172","msg":"trace[472874647] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"210.297403ms","start":"2025-10-18T09:42:53.466722Z","end":"2025-10-18T09:42:53.677020Z","steps":["trace[472874647] 'process raft request'  (duration: 210.153683ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:42:53.677056Z","caller":"traceutil/trace.go:172","msg":"trace[914635031] range","detail":"{range_begin:/registry/csinodes/no-preload-589869; range_end:; response_count:0; response_revision:20; }","duration":"154.944956ms","start":"2025-10-18T09:42:53.522100Z","end":"2025-10-18T09:42:53.677045Z","steps":["trace[914635031] 'agreement among raft nodes before linearized reading'  (duration: 154.757477ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:43:26 up  1:25,  0 user,  load average: 3.71, 3.09, 1.80
	Linux no-preload-589869 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b803b974d8dfb984f2e2f62bb95cfb13cfc4435f7eae197af229389528e5df44] <==
	I1018 09:43:03.452055       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:43:03.452289       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:43:03.452406       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:43:03.452419       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:43:03.452445       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:43:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:43:03.653884       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:43:03.653914       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:43:03.653935       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:43:03.654533       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:43:04.154733       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:43:04.154758       1 metrics.go:72] Registering metrics
	I1018 09:43:04.154854       1 controller.go:711] "Syncing nftables rules"
	I1018 09:43:13.654890       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:43:13.654962       1 main.go:301] handling current node
	I1018 09:43:23.656921       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:43:23.656963       1 main.go:301] handling current node
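	[editor's note] kindnet's "Handling node with IPs" loop walks the node list and, for each peer node, needs its InternalIP and PodCIDRs to program routes. A reduced client-go sketch of that enumeration (not kindnet's actual code; the kubeconfig path is a placeholder):

	    package main

	    import (
	        "context"
	        "fmt"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	        if err != nil { panic(err) }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil { panic(err) }

	        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	        if err != nil { panic(err) }
	        for _, n := range nodes.Items {
	            // InternalIP and PodCIDRs are the two facts a CNI like kindnet needs per node.
	            for _, addr := range n.Status.Addresses {
	                if addr.Type == corev1.NodeInternalIP {
	                    fmt.Printf("%s %s %v\n", n.Name, addr.Address, n.Spec.PodCIDRs)
	                }
	            }
	        }
	    }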
	
	
	==> kube-apiserver [01bcd5cf55a989b43ad55d059cec81d24d962e1b29de396f6ecfe1a42e70f2d2] <==
	I1018 09:42:53.427117       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:42:53.427258       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1018 09:42:53.428121       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1018 09:42:53.659414       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:42:53.659665       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:42:53.721679       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:42:54.166427       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:42:54.169987       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:42:54.170003       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:42:54.581926       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:42:54.615551       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:42:54.734894       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:42:54.740400       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 09:42:54.741521       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:42:54.745276       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:42:54.839549       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:42:55.708722       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:42:55.719890       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:42:55.726799       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:43:00.243409       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:43:00.246797       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:43:00.693603       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:43:00.693602       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:43:00.891114       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1018 09:43:25.536932       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:35564: use of closed network connection
	
	
	==> kube-controller-manager [fa7afada4f52dcce16513f295624b55bd7fbfc6e2512791514ed5b026105b781] <==
	I1018 09:42:59.838469       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:42:59.838575       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:42:59.838659       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-589869"
	I1018 09:42:59.838709       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:42:59.838722       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:42:59.838929       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:42:59.838957       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:42:59.839118       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:42:59.839146       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:42:59.839216       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:42:59.839639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:42:59.839661       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:42:59.839769       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:42:59.839768       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:42:59.842137       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:42:59.844413       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:42:59.844479       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:42:59.844507       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:42:59.844514       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:42:59.844520       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:42:59.846639       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:42:59.847729       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:42:59.850850       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-589869" podCIDRs=["10.244.0.0/24"]
	I1018 09:42:59.857845       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:43:14.841194       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1679c90ffcd7b6b8a42c805ab6528988f503c546093b3e07fad2a1538c96ce82] <==
	I1018 09:43:01.106510       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:43:01.173998       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:43:01.278340       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:43:01.278383       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:43:01.278484       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:43:01.301602       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:43:01.301652       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:43:01.308562       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:43:01.309041       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:43:01.309130       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:01.311175       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:43:01.311188       1 config.go:200] "Starting service config controller"
	I1018 09:43:01.311203       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:43:01.311207       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:43:01.311240       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:43:01.311247       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:43:01.312234       1 config.go:309] "Starting node config controller"
	I1018 09:43:01.312535       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:43:01.312579       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:43:01.411607       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:43:01.411794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:43:01.412107       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [42b5089dd9e09908c70d723ff55449db05ebc0db68dccc70f71e646f5bbf5830] <==
	E1018 09:42:53.234350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:42:53.234382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:42:53.234471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:42:53.234481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:42:53.234563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:42:53.234564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:42:53.234567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:42:53.234612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:42:53.234731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:42:53.234779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:42:53.234794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:42:54.046726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:42:54.047371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:42:54.100989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:42:54.162395       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:42:54.210491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:42:54.213577       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:42:54.238957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:42:54.260149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:42:54.276324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:42:54.340022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:42:54.410356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:42:54.422369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:42:54.427382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1018 09:42:57.130076       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:42:56 no-preload-589869 kubelet[2304]: I1018 09:42:56.595808    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-589869" podStartSLOduration=1.595792736 podStartE2EDuration="1.595792736s" podCreationTimestamp="2025-10-18 09:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:42:56.587579914 +0000 UTC m=+1.130135854" watchObservedRunningTime="2025-10-18 09:42:56.595792736 +0000 UTC m=+1.138348657"
	Oct 18 09:42:56 no-preload-589869 kubelet[2304]: I1018 09:42:56.595989    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-589869" podStartSLOduration=1.595978722 podStartE2EDuration="1.595978722s" podCreationTimestamp="2025-10-18 09:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:42:56.595891925 +0000 UTC m=+1.138447858" watchObservedRunningTime="2025-10-18 09:42:56.595978722 +0000 UTC m=+1.138534662"
	Oct 18 09:42:56 no-preload-589869 kubelet[2304]: I1018 09:42:56.614192    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-589869" podStartSLOduration=1.614171821 podStartE2EDuration="1.614171821s" podCreationTimestamp="2025-10-18 09:42:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:42:56.604660032 +0000 UTC m=+1.147215970" watchObservedRunningTime="2025-10-18 09:42:56.614171821 +0000 UTC m=+1.156727761"
	Oct 18 09:42:59 no-preload-589869 kubelet[2304]: I1018 09:42:59.950197    2304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:42:59 no-preload-589869 kubelet[2304]: I1018 09:42:59.950744    2304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762209    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rkwx\" (UniqueName: \"kubernetes.io/projected/1f457398-f624-4d8b-bb01-66d9f3a15033-kube-api-access-4rkwx\") pod \"kube-proxy-45kpn\" (UID: \"1f457398-f624-4d8b-bb01-66d9f3a15033\") " pod="kube-system/kube-proxy-45kpn"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762262    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f9912369-31bd-48e1-b05e-e623a8b4e541-cni-cfg\") pod \"kindnet-zjqmf\" (UID: \"f9912369-31bd-48e1-b05e-e623a8b4e541\") " pod="kube-system/kindnet-zjqmf"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762288    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1f457398-f624-4d8b-bb01-66d9f3a15033-xtables-lock\") pod \"kube-proxy-45kpn\" (UID: \"1f457398-f624-4d8b-bb01-66d9f3a15033\") " pod="kube-system/kube-proxy-45kpn"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762310    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9912369-31bd-48e1-b05e-e623a8b4e541-lib-modules\") pod \"kindnet-zjqmf\" (UID: \"f9912369-31bd-48e1-b05e-e623a8b4e541\") " pod="kube-system/kindnet-zjqmf"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762329    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1f457398-f624-4d8b-bb01-66d9f3a15033-lib-modules\") pod \"kube-proxy-45kpn\" (UID: \"1f457398-f624-4d8b-bb01-66d9f3a15033\") " pod="kube-system/kube-proxy-45kpn"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762350    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9912369-31bd-48e1-b05e-e623a8b4e541-xtables-lock\") pod \"kindnet-zjqmf\" (UID: \"f9912369-31bd-48e1-b05e-e623a8b4e541\") " pod="kube-system/kindnet-zjqmf"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762370    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vssxp\" (UniqueName: \"kubernetes.io/projected/f9912369-31bd-48e1-b05e-e623a8b4e541-kube-api-access-vssxp\") pod \"kindnet-zjqmf\" (UID: \"f9912369-31bd-48e1-b05e-e623a8b4e541\") " pod="kube-system/kindnet-zjqmf"
	Oct 18 09:43:00 no-preload-589869 kubelet[2304]: I1018 09:43:00.762425    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1f457398-f624-4d8b-bb01-66d9f3a15033-kube-proxy\") pod \"kube-proxy-45kpn\" (UID: \"1f457398-f624-4d8b-bb01-66d9f3a15033\") " pod="kube-system/kube-proxy-45kpn"
	Oct 18 09:43:01 no-preload-589869 kubelet[2304]: I1018 09:43:01.572950    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-45kpn" podStartSLOduration=1.572931582 podStartE2EDuration="1.572931582s" podCreationTimestamp="2025-10-18 09:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:43:01.572768023 +0000 UTC m=+6.115323962" watchObservedRunningTime="2025-10-18 09:43:01.572931582 +0000 UTC m=+6.115487521"
	Oct 18 09:43:04 no-preload-589869 kubelet[2304]: I1018 09:43:04.378938    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-zjqmf" podStartSLOduration=2.153515614 podStartE2EDuration="4.378916947s" podCreationTimestamp="2025-10-18 09:43:00 +0000 UTC" firstStartedPulling="2025-10-18 09:43:01.023453745 +0000 UTC m=+5.566009664" lastFinishedPulling="2025-10-18 09:43:03.248855065 +0000 UTC m=+7.791410997" observedRunningTime="2025-10-18 09:43:03.579947894 +0000 UTC m=+8.122503832" watchObservedRunningTime="2025-10-18 09:43:04.378916947 +0000 UTC m=+8.921472883"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.139736    2304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.262269    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/602e29ab-ecfb-4629-a801-28c32d870d4a-config-volume\") pod \"coredns-66bc5c9577-pck54\" (UID: \"602e29ab-ecfb-4629-a801-28c32d870d4a\") " pod="kube-system/coredns-66bc5c9577-pck54"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.262321    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49r56\" (UniqueName: \"kubernetes.io/projected/602e29ab-ecfb-4629-a801-28c32d870d4a-kube-api-access-49r56\") pod \"coredns-66bc5c9577-pck54\" (UID: \"602e29ab-ecfb-4629-a801-28c32d870d4a\") " pod="kube-system/coredns-66bc5c9577-pck54"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.262356    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9c851a2c-8320-45ae-9c2f-3f60bc0401c8-tmp\") pod \"storage-provisioner\" (UID: \"9c851a2c-8320-45ae-9c2f-3f60bc0401c8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.262396    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpnr\" (UniqueName: \"kubernetes.io/projected/9c851a2c-8320-45ae-9c2f-3f60bc0401c8-kube-api-access-bgpnr\") pod \"storage-provisioner\" (UID: \"9c851a2c-8320-45ae-9c2f-3f60bc0401c8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.604287    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pck54" podStartSLOduration=13.604267283 podStartE2EDuration="13.604267283s" podCreationTimestamp="2025-10-18 09:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:43:14.604185204 +0000 UTC m=+19.146741142" watchObservedRunningTime="2025-10-18 09:43:14.604267283 +0000 UTC m=+19.146823222"
	Oct 18 09:43:14 no-preload-589869 kubelet[2304]: I1018 09:43:14.624086    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.624066455 podStartE2EDuration="13.624066455s" podCreationTimestamp="2025-10-18 09:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:43:14.623742178 +0000 UTC m=+19.166298117" watchObservedRunningTime="2025-10-18 09:43:14.624066455 +0000 UTC m=+19.166622394"
	Oct 18 09:43:16 no-preload-589869 kubelet[2304]: I1018 09:43:16.475481    2304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz2q5\" (UniqueName: \"kubernetes.io/projected/51be3b0e-97f1-4abd-863d-5069b9e73230-kube-api-access-vz2q5\") pod \"busybox\" (UID: \"51be3b0e-97f1-4abd-863d-5069b9e73230\") " pod="default/busybox"
	Oct 18 09:43:19 no-preload-589869 kubelet[2304]: I1018 09:43:19.617186    2304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5578495399999999 podStartE2EDuration="3.617168908s" podCreationTimestamp="2025-10-18 09:43:16 +0000 UTC" firstStartedPulling="2025-10-18 09:43:16.775905523 +0000 UTC m=+21.318461453" lastFinishedPulling="2025-10-18 09:43:18.835224903 +0000 UTC m=+23.377780821" observedRunningTime="2025-10-18 09:43:19.617157534 +0000 UTC m=+24.159713471" watchObservedRunningTime="2025-10-18 09:43:19.617168908 +0000 UTC m=+24.159724846"
	Oct 18 09:43:25 no-preload-589869 kubelet[2304]: E1018 09:43:25.536850    2304 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:44500->127.0.0.1:38237: write tcp 127.0.0.1:44500->127.0.0.1:38237: write: broken pipe
	
	
	==> storage-provisioner [223932e24247fbfad4305a6034f91d6e02c6d20dc52ed9ec3a7c4b37637ddb89] <==
	I1018 09:43:14.525335       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:43:14.534023       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:43:14.534074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:43:14.536368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:14.540385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:43:14.540595       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:43:14.540746       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589869_ce26d266-6399-4b5e-aa29-13964d6948ab!
	I1018 09:43:14.540746       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ffa4ca64-af5f-429e-8808-12f7378aafdf", APIVersion:"v1", ResourceVersion:"409", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589869_ce26d266-6399-4b5e-aa29-13964d6948ab became leader
	W1018 09:43:14.542772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:14.546212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:43:14.641216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589869_ce26d266-6399-4b5e-aa29-13964d6948ab!
	W1018 09:43:16.549244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:16.553236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:18.556115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:18.560622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:20.564142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:20.568213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:22.571189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:22.575970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:24.578687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:24.582148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:26.585747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:43:26.589438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
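The repeated `v1 Endpoints is deprecated in v1.33+` warnings above come from the storage provisioner's leader election, which still locks on the `k8s.io-minikube-hostpath` Endpoints object; they recur every two seconds and are noise rather than a failure signal. One way to confirm the lock object itself is healthy (a sketch; it assumes the kubectl context matches the profile name, as the helper commands below do):

	kubectl --context no-preload-589869 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml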
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.08s)
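The paired errors at 09:43:25 above (the apiserver's `use of closed network connection` and the kubelet's `broken pipe` while proxying) are two views of one connection being torn down, most likely by the log collection itself, and do not explain the addon failure. To retry the failing step by hand, something like this should reproduce the non-zero exit (a sketch; it assumes the test enables metrics-server, as minikube's EnableAddonWhileActive step typically does, and it omits any image-override flags the harness may pass):

	out/minikube-linux-amd64 -p no-preload-589869 addons enable metrics-server --alsologtostderr -v=1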

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-619885 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-619885 --alsologtostderr -v=1: exit status 80 (2.472880015s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-619885 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:44:14.243205  371367 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:14.243301  371367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:14.243308  371367 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:14.243312  371367 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:14.243508  371367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:14.243746  371367 out.go:368] Setting JSON to false
	I1018 09:44:14.243788  371367 mustload.go:65] Loading cluster: old-k8s-version-619885
	I1018 09:44:14.244137  371367 config.go:182] Loaded profile config "old-k8s-version-619885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1018 09:44:14.244531  371367 cli_runner.go:164] Run: docker container inspect old-k8s-version-619885 --format={{.State.Status}}
	I1018 09:44:14.263619  371367 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:44:14.263976  371367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:14.328289  371367 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:14.317775983 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:14.329025  371367 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-619885 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:44:14.330787  371367 out.go:179] * Pausing node old-k8s-version-619885 ... 
	I1018 09:44:14.332729  371367 host.go:66] Checking if "old-k8s-version-619885" exists ...
	I1018 09:44:14.333061  371367 ssh_runner.go:195] Run: systemctl --version
	I1018 09:44:14.333102  371367 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-619885
	I1018 09:44:14.352108  371367 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33191 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/old-k8s-version-619885/id_rsa Username:docker}
	I1018 09:44:14.450316  371367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:14.463778  371367 pause.go:52] kubelet running: true
	I1018 09:44:14.463864  371367 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:14.639908  371367 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:14.640025  371367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:14.711960  371367 cri.go:89] found id: "5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea"
	I1018 09:44:14.711986  371367 cri.go:89] found id: "3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a"
	I1018 09:44:14.711993  371367 cri.go:89] found id: "868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	I1018 09:44:14.712000  371367 cri.go:89] found id: "2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070"
	I1018 09:44:14.712005  371367 cri.go:89] found id: "9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f"
	I1018 09:44:14.712011  371367 cri.go:89] found id: "7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a"
	I1018 09:44:14.712015  371367 cri.go:89] found id: "fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9"
	I1018 09:44:14.712021  371367 cri.go:89] found id: "9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b"
	I1018 09:44:14.712024  371367 cri.go:89] found id: "c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3"
	I1018 09:44:14.712033  371367 cri.go:89] found id: "23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	I1018 09:44:14.712037  371367 cri.go:89] found id: "2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64"
	I1018 09:44:14.712042  371367 cri.go:89] found id: ""
	I1018 09:44:14.712091  371367 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:14.723756  371367 retry.go:31] will retry after 216.143347ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:14Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:14.940241  371367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:14.953236  371367 pause.go:52] kubelet running: false
	I1018 09:44:14.953306  371367 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:15.095486  371367 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:15.095582  371367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:15.164791  371367 cri.go:89] found id: "5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea"
	I1018 09:44:15.164832  371367 cri.go:89] found id: "3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a"
	I1018 09:44:15.164839  371367 cri.go:89] found id: "868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	I1018 09:44:15.164844  371367 cri.go:89] found id: "2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070"
	I1018 09:44:15.164848  371367 cri.go:89] found id: "9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f"
	I1018 09:44:15.164853  371367 cri.go:89] found id: "7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a"
	I1018 09:44:15.164858  371367 cri.go:89] found id: "fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9"
	I1018 09:44:15.164862  371367 cri.go:89] found id: "9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b"
	I1018 09:44:15.164867  371367 cri.go:89] found id: "c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3"
	I1018 09:44:15.164875  371367 cri.go:89] found id: "23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	I1018 09:44:15.164879  371367 cri.go:89] found id: "2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64"
	I1018 09:44:15.164884  371367 cri.go:89] found id: ""
	I1018 09:44:15.164934  371367 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:15.176922  371367 retry.go:31] will retry after 351.471854ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:15Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:15.529519  371367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:15.542769  371367 pause.go:52] kubelet running: false
	I1018 09:44:15.542842  371367 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:15.678653  371367 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:15.678740  371367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:15.749713  371367 cri.go:89] found id: "5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea"
	I1018 09:44:15.749735  371367 cri.go:89] found id: "3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a"
	I1018 09:44:15.749741  371367 cri.go:89] found id: "868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	I1018 09:44:15.749746  371367 cri.go:89] found id: "2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070"
	I1018 09:44:15.749751  371367 cri.go:89] found id: "9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f"
	I1018 09:44:15.749756  371367 cri.go:89] found id: "7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a"
	I1018 09:44:15.749761  371367 cri.go:89] found id: "fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9"
	I1018 09:44:15.749765  371367 cri.go:89] found id: "9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b"
	I1018 09:44:15.749769  371367 cri.go:89] found id: "c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3"
	I1018 09:44:15.749777  371367 cri.go:89] found id: "23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	I1018 09:44:15.749781  371367 cri.go:89] found id: "2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64"
	I1018 09:44:15.749784  371367 cri.go:89] found id: ""
	I1018 09:44:15.749835  371367 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:15.761939  371367 retry.go:31] will retry after 658.4453ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:15Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:16.420763  371367 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:16.434104  371367 pause.go:52] kubelet running: false
	I1018 09:44:16.434162  371367 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:16.578957  371367 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:16.579028  371367 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:16.645996  371367 cri.go:89] found id: "5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea"
	I1018 09:44:16.646051  371367 cri.go:89] found id: "3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a"
	I1018 09:44:16.646060  371367 cri.go:89] found id: "868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	I1018 09:44:16.646066  371367 cri.go:89] found id: "2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070"
	I1018 09:44:16.646071  371367 cri.go:89] found id: "9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f"
	I1018 09:44:16.646077  371367 cri.go:89] found id: "7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a"
	I1018 09:44:16.646081  371367 cri.go:89] found id: "fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9"
	I1018 09:44:16.646085  371367 cri.go:89] found id: "9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b"
	I1018 09:44:16.646089  371367 cri.go:89] found id: "c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3"
	I1018 09:44:16.646097  371367 cri.go:89] found id: "23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	I1018 09:44:16.646101  371367 cri.go:89] found id: "2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64"
	I1018 09:44:16.646105  371367 cri.go:89] found id: ""
	I1018 09:44:16.646159  371367 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:16.660460  371367 out.go:203] 
	W1018 09:44:16.661680  371367 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:44:16.661700  371367 out.go:285] * 
	* 
	W1018 09:44:16.665659  371367 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:44:16.666694  371367 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-619885 --alsologtostderr -v=1 failed: exit status 80
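The stderr above shows the actual failure mode: the first loop finds kubelet running, disables it, and lists eleven containers via crictl, but every retry of `sudo runc list -f json` dies with `open /run/runc: no such file or directory`, so minikube never gets a container list to pause. Two quick checks from the host narrow this down (a sketch; it assumes `docker exec` into the kicbase node works, as the other post-mortem steps do):

	docker exec old-k8s-version-619885 ls /run/runc
	docker exec old-k8s-version-619885 sudo crictl ps --state running -q

If crictl sees containers while /run/runc is absent, the runtime is tracking state somewhere other than runc's default root, and `runc list` without an explicit `--root` cannot succeed.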
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-619885
helpers_test.go:243: (dbg) docker inspect old-k8s-version-619885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	        "Created": "2025-10-18T09:42:17.27822051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 364774,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:43:33.815850746Z",
	            "FinishedAt": "2025-10-18T09:43:33.019788086Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hosts",
	        "LogPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191-json.log",
	        "Name": "/old-k8s-version-619885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-619885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-619885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	                "LowerDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-619885",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-619885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-619885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b7f5d056d403b4312f8e4d5df0917c98c1d0d6970ae9a0ad0d8374b29dbc1b3",
	            "SandboxKey": "/var/run/docker/netns/8b7f5d056d40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-619885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:ab:ca:79:f3:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f172a0295669142d53ec5906c89946014e1c53fe54e9e8bba2fffa329bff8586",
	                    "EndpointID": "321df6cce21b4d40cedb28e419b6b1828be8af7d5372958373eda7681745fcda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-619885",
	                        "1ed6b6e47d49"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885: exit status 2 (312.142992ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25: (1.125371615s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p pause-238319                                                                                                                                                                                                                               │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ force-systemd-flag-565668 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p force-systemd-flag-565668                                                                                                                                                                                                                  │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:43:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:43:45.888389  366919 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:43:45.888659  366919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:45.888668  366919 out.go:374] Setting ErrFile to fd 2...
	I1018 09:43:45.888672  366919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:45.888914  366919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:43:45.889335  366919 out.go:368] Setting JSON to false
	I1018 09:43:45.890614  366919 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5170,"bootTime":1760775456,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:43:45.890707  366919 start.go:141] virtualization: kvm guest
	I1018 09:43:45.892590  366919 out.go:179] * [no-preload-589869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:43:45.893663  366919 notify.go:220] Checking for updates...
	I1018 09:43:45.893672  366919 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:43:45.894765  366919 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:43:45.895898  366919 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:45.897118  366919 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:43:45.898213  366919 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:43:45.899245  366919 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:43:45.900700  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:45.901184  366919 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:43:45.924781  366919 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:43:45.924886  366919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:43:45.981626  366919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:43:45.971756736 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:43:45.981735  366919 docker.go:318] overlay module found
	I1018 09:43:45.983342  366919 out.go:179] * Using the docker driver based on existing profile
	I1018 09:43:45.984469  366919 start.go:305] selected driver: docker
	I1018 09:43:45.984486  366919 start.go:925] validating driver "docker" against &{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:45.984565  366919 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:43:45.985110  366919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:43:46.037775  366919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:43:46.028344169 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:43:46.038191  366919 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:46.038224  366919 cni.go:84] Creating CNI manager for ""
	I1018 09:43:46.038282  366919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:43:46.038328  366919 start.go:349] cluster config:
	{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:46.040386  366919 out.go:179] * Starting "no-preload-589869" primary control-plane node in "no-preload-589869" cluster
	I1018 09:43:46.041380  366919 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:43:46.042560  366919 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:43:46.043522  366919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:43:46.043617  366919 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:43:46.043675  366919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:43:46.043842  366919 cache.go:107] acquiring lock: {Name:mk8d380524b774b5edadec7411def9ea12a01591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043848  366919 cache.go:107] acquiring lock: {Name:mka49eac321c9a155354693a3a6be91b02cdc4a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043918  366919 cache.go:107] acquiring lock: {Name:mka2dd49281e4623d770ed33d958b114b7cc789f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043868  366919 cache.go:107] acquiring lock: {Name:mk3d292d197011122be585423e2f701ad4e9ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043929  366919 cache.go:107] acquiring lock: {Name:mk2f4cf60554cd9991205940f1aa9911f9bb383a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043985  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:43:46.043957  366919 cache.go:107] acquiring lock: {Name:mka90deb6de3b7e19386c6d0f0785fc3e96d2e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043995  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:43:46.043996  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:43:46.043996  366919 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 78.503µs
	I1018 09:43:46.044005  366919 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 89.399µs
	I1018 09:43:46.044007  366919 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 156.418µs
	I1018 09:43:46.043987  366919 cache.go:107] acquiring lock: {Name:mk9ad0aa9744bfc6007683a43233309af99e2ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.044018  366919 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:43:46.044018  366919 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:43:46.044012  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:43:46.043998  366919 cache.go:107] acquiring lock: {Name:mk61b8919142cd1b35d71e72ba258fc114b79f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.044047  366919 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 246.637µs
	I1018 09:43:46.044055  366919 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:43:46.044104  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:43:46.043985  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:43:46.044129  366919 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 223.377µs
	I1018 09:43:46.044138  366919 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:43:46.044143  366919 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 339.93µs
	I1018 09:43:46.044150  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:43:46.044158  366919 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:43:46.044019  366919 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:43:46.044160  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:43:46.044200  366919 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 259.875µs
	I1018 09:43:46.044212  366919 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:43:46.044165  366919 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 222.054µs
	I1018 09:43:46.044220  366919 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:43:46.044229  366919 cache.go:87] Successfully saved all images to host disk.
	I1018 09:43:46.066081  366919 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:43:46.066101  366919 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:43:46.066116  366919 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:43:46.066137  366919 start.go:360] acquireMachinesLock for no-preload-589869: {Name:mk63da8322dd3ab3d8f833b8b716fde137314571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.066187  366919 start.go:364] duration metric: took 35.579µs to acquireMachinesLock for "no-preload-589869"
	I1018 09:43:46.066204  366919 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:43:46.066212  366919 fix.go:54] fixHost starting: 
	I1018 09:43:46.066405  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:46.083586  366919 fix.go:112] recreateIfNeeded on no-preload-589869: state=Stopped err=<nil>
	W1018 09:43:46.083616  366919 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:43:44.053069  364574 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:43:44.059054  364574 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:43:44.060491  364574 api_server.go:141] control plane version: v1.28.0
	I1018 09:43:44.060514  364574 api_server.go:131] duration metric: took 507.720119ms to wait for apiserver health ...
	I1018 09:43:44.060523  364574 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:44.064165  364574 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:44.064203  364574 system_pods.go:61] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:44.064216  364574 system_pods.go:61] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:44.064228  364574 system_pods.go:61] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:44.064239  364574 system_pods.go:61] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:44.064249  364574 system_pods.go:61] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:44.064255  364574 system_pods.go:61] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:44.064263  364574 system_pods.go:61] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:44.064272  364574 system_pods.go:61] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:44.064280  364574 system_pods.go:74] duration metric: took 3.752222ms to wait for pod list to return data ...
	I1018 09:43:44.064293  364574 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:44.066175  364574 default_sa.go:45] found service account: "default"
	I1018 09:43:44.066192  364574 default_sa.go:55] duration metric: took 1.892091ms for default service account to be created ...
	I1018 09:43:44.066200  364574 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:44.069244  364574 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:44.069272  364574 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:44.069283  364574 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:44.069295  364574 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:44.069305  364574 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:44.069311  364574 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:44.069321  364574 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:44.069329  364574 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:44.069337  364574 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:44.069351  364574 system_pods.go:126] duration metric: took 3.145847ms to wait for k8s-apps to be running ...
	I1018 09:43:44.069363  364574 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:44.069414  364574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:44.082213  364574 system_svc.go:56] duration metric: took 12.842491ms WaitForService to wait for kubelet
	I1018 09:43:44.082235  364574 kubeadm.go:586] duration metric: took 3.64667679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:44.082253  364574 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:44.084683  364574 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:44.084708  364574 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:44.084721  364574 node_conditions.go:105] duration metric: took 2.464173ms to run NodePressure ...
	I1018 09:43:44.084734  364574 start.go:241] waiting for startup goroutines ...
	I1018 09:43:44.084743  364574 start.go:246] waiting for cluster config update ...
	I1018 09:43:44.084758  364574 start.go:255] writing updated cluster config ...
	I1018 09:43:44.085061  364574 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:44.088911  364574 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:44.093426  364574 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:43:46.101089  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	I1018 09:43:46.224301  353123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054994591s)
	W1018 09:43:46.224348  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1018 09:43:46.224359  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:46.224376  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:46.259157  353123 logs.go:123] Gathering logs for kube-apiserver [bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9] ...
	I1018 09:43:46.259190  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:46.292558  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:46.292596  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:46.339561  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:46.339652  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:46.086041  366919 out.go:252] * Restarting existing docker container for "no-preload-589869" ...
	I1018 09:43:46.086128  366919 cli_runner.go:164] Run: docker start no-preload-589869
	I1018 09:43:46.330566  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:46.349925  366919 kic.go:430] container "no-preload-589869" state is running.
	I1018 09:43:46.350504  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:46.369484  366919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:43:46.369785  366919 machine.go:93] provisionDockerMachine start ...
	I1018 09:43:46.369895  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:46.389951  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:46.390197  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:46.390213  366919 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:43:46.390886  366919 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54370->127.0.0.1:33196: read: connection reset by peer
	I1018 09:43:49.528799  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:43:49.528852  366919 ubuntu.go:182] provisioning hostname "no-preload-589869"
	I1018 09:43:49.528927  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:49.546576  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:49.546787  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:49.546801  366919 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-589869 && echo "no-preload-589869" | sudo tee /etc/hostname
	I1018 09:43:49.689515  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:43:49.689617  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:49.707758  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:49.708074  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:49.708102  366919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-589869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-589869/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-589869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:43:49.841538  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:43:49.841582  366919 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:43:49.841607  366919 ubuntu.go:190] setting up certificates
	I1018 09:43:49.841619  366919 provision.go:84] configureAuth start
	I1018 09:43:49.841677  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:49.860015  366919 provision.go:143] copyHostCerts
	I1018 09:43:49.860089  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:43:49.860108  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:43:49.860195  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:43:49.860343  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:43:49.860357  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:43:49.860401  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:43:49.860495  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:43:49.860506  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:43:49.860545  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:43:49.860628  366919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.no-preload-589869 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-589869]
	I1018 09:43:50.148919  366919 provision.go:177] copyRemoteCerts
	I1018 09:43:50.148980  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:43:50.149021  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.166754  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.263430  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:43:50.281417  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:43:50.298517  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:43:50.315182  366919 provision.go:87] duration metric: took 473.546028ms to configureAuth
	I1018 09:43:50.315208  366919 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:43:50.315369  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:50.315472  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.332788  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:50.333021  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:50.333040  366919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:43:50.619905  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:43:50.619938  366919 machine.go:96] duration metric: took 4.250134197s to provisionDockerMachine
	I1018 09:43:50.619954  366919 start.go:293] postStartSetup for "no-preload-589869" (driver="docker")
	I1018 09:43:50.619967  366919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:43:50.620044  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:43:50.620100  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.638702  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.737639  366919 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:43:50.741946  366919 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:43:50.741983  366919 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:43:50.741998  366919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:43:50.742054  366919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:43:50.742158  366919 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:43:50.742279  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:43:50.751096  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:43:50.772874  366919 start.go:296] duration metric: took 152.899929ms for postStartSetup
	I1018 09:43:50.772967  366919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:43:50.773015  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.793737  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.890997  366919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:43:50.896274  366919 fix.go:56] duration metric: took 4.830055131s for fixHost
	I1018 09:43:50.896298  366919 start.go:83] releasing machines lock for "no-preload-589869", held for 4.830101526s
	I1018 09:43:50.896361  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:50.914781  366919 ssh_runner.go:195] Run: cat /version.json
	I1018 09:43:50.914850  366919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:43:50.914857  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.914918  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.935573  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.936201  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:51.083096  366919 ssh_runner.go:195] Run: systemctl --version
	I1018 09:43:51.089737  366919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:43:51.124844  366919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:43:51.129934  366919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:43:51.130009  366919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:43:51.137935  366919 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:43:51.137955  366919 start.go:495] detecting cgroup driver to use...
	I1018 09:43:51.137984  366919 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:43:51.138017  366919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:43:51.152013  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:43:51.163809  366919 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:43:51.163894  366919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:43:51.178003  366919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:43:51.189980  366919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:43:51.271384  366919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:43:51.351239  366919 docker.go:234] disabling docker service ...
	I1018 09:43:51.351297  366919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:43:51.365328  366919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:43:51.377371  366919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:43:51.457431  366919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:43:51.538995  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:43:51.551761  366919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:43:51.565859  366919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:43:51.565923  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.574929  366919 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:43:51.574983  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.583790  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.592540  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.602194  366919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:43:51.610539  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.619364  366919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.627930  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.637014  366919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:43:51.644621  366919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:43:51.652954  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:51.732572  366919 ssh_runner.go:195] Run: sudo systemctl restart crio
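
	For context, the sequence above rewrites CRI-O's drop-in config in place: the pause image, the systemd cgroup manager, the conmon cgroup, and the unprivileged-port sysctl, then reloads systemd and restarts crio. Below is a minimal Go sketch of the same key = value rewrite, assuming only the drop-in path shown in the log; it is illustrative, not minikube's actual crio.go.

	// sketch_crio_config.go - a minimal sketch (not minikube's implementation)
	// of the sed-style rewrites logged above: load the CRI-O drop-in config,
	// replace a `key = ...` line via regex, and write the file back.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setConfigValue rewrites any `key = ...` line to `key = "value"`.
	func setConfigValue(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := setConfigValue(string(data), "pause_image", "registry.k8s.io/pause:3.10.1")
		out = setConfigValue(out, "cgroup_manager", "systemd")
		if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
			panic(err)
		}
	}
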
	I1018 09:43:51.846566  366919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:43:51.846634  366919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:43:51.850912  366919 start.go:563] Will wait 60s for crictl version
	I1018 09:43:51.850994  366919 ssh_runner.go:195] Run: which crictl
	I1018 09:43:51.855054  366919 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:43:51.879985  366919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:43:51.880070  366919 ssh_runner.go:195] Run: crio --version
	I1018 09:43:51.908200  366919 ssh_runner.go:195] Run: crio --version
	I1018 09:43:51.937497  366919 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:43:51.938545  366919 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:43:51.956396  366919 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:43:51.960716  366919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:43:51.971090  366919 kubeadm.go:883] updating cluster {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:43:51.971196  366919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:43:51.971246  366919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:43:52.006147  366919 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:43:52.006169  366919 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:43:52.006176  366919 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 09:43:52.006263  366919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-589869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:43:52.006320  366919 ssh_runner.go:195] Run: crio config
	I1018 09:43:52.055879  366919 cni.go:84] Creating CNI manager for ""
	I1018 09:43:52.055905  366919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:43:52.055926  366919 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:43:52.055955  366919 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-589869 NodeName:no-preload-589869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:43:52.056121  366919 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-589869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:43:52.056185  366919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:43:52.066083  366919 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:43:52.066166  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:43:52.074400  366919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:43:52.087017  366919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:43:52.100911  366919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
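
	The kubeadm config printed above is rendered from the cluster parameters and copied to /var/tmp/minikube/kubeadm.yaml.new (2213 bytes). A hedged sketch of how such a file can be rendered with text/template follows; the parameter struct and the abbreviated template are illustrative, not minikube's real ones.

	// sketch_kubeadm_template.go - an illustrative rendering of a kubeadm
	// config from cluster parameters; values come from the log above.
	package main

	import (
		"os"
		"text/template"
	)

	type clusterParams struct { // hypothetical parameter struct
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		params := clusterParams{"192.168.94.2", 8443, "no-preload-589869", "10.244.0.0/16"}
		if err := t.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
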
	I1018 09:43:52.114448  366919 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:43:52.118293  366919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:43:52.128564  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:52.216446  366919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:52.244372  366919 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869 for IP: 192.168.94.2
	I1018 09:43:52.244396  366919 certs.go:195] generating shared ca certs ...
	I1018 09:43:52.244411  366919 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:52.244575  366919 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:43:52.244647  366919 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:43:52.244662  366919 certs.go:257] generating profile certs ...
	I1018 09:43:52.244781  366919 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key
	I1018 09:43:52.244891  366919 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d
	I1018 09:43:52.244955  366919 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key
	I1018 09:43:52.245131  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:43:52.245169  366919 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:43:52.245184  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:43:52.245219  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:43:52.245259  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:43:52.245293  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:43:52.245346  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:43:52.246039  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:43:52.266549  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:43:52.285975  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:43:52.307060  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:43:52.331182  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:43:52.352215  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:43:52.370064  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:43:52.387403  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:43:52.405029  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:43:52.423936  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:43:52.444459  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:43:52.463403  366919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:43:52.476687  366919 ssh_runner.go:195] Run: openssl version
	I1018 09:43:52.484000  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:43:52.492608  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.496517  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.496584  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.535913  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:43:52.544148  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:43:52.552802  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.556563  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.556626  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.594448  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:43:52.604150  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:43:52.614424  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.618359  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.618412  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.654528  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
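
	The hash-and-symlink steps above install each CA certificate under /etc/ssl/certs/<subject-hash>.0, the name OpenSSL-style trust stores resolve. A small sketch of that step, shelling out to openssl for the subject hash exactly as the log does; the input path is one of the certs above.

	// sketch_cert_hash_link.go - an illustrative version of the
	// `openssl x509 -hash` plus `ln -fs` sequence logged above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkCert(pemPath string) error {
		// Ask openssl for the subject-name hash of the certificate.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror `ln -fs` semantics: replace any existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
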
	I1018 09:43:52.663177  366919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:43:52.667296  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:43:52.701931  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:43:52.738849  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:43:52.786090  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:43:52.832096  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:43:52.886538  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
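
	Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done in pure Go with crypto/x509; a minimal sketch:

	// sketch_checkend.go - a Go equivalent of `openssl x509 -checkend 86400`:
	// parse a PEM certificate and report whether it expires within 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when NotAfter falls before now+d, i.e. the cert expires soon.
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
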
	I1018 09:43:52.930366  366919 kubeadm.go:400] StartCluster: {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:52.930448  366919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:43:52.930513  366919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:43:52.960751  366919 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:43:52.960776  366919 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:43:52.960782  366919 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:43:52.960786  366919 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:43:52.960790  366919 cri.go:89] found id: ""
	I1018 09:43:52.960849  366919 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:43:52.973139  366919 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:43:52Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:43:52.973210  366919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:43:52.982376  366919 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:43:52.982400  366919 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:43:52.982452  366919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:43:52.989996  366919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:43:52.990697  366919 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-589869" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:52.991284  366919 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-589869" cluster setting kubeconfig missing "no-preload-589869" context setting]
	I1018 09:43:52.992029  366919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:52.993466  366919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:43:53.001092  366919 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 09:43:53.001124  366919 kubeadm.go:601] duration metric: took 18.716954ms to restartPrimaryControlPlane
	I1018 09:43:53.001134  366919 kubeadm.go:402] duration metric: took 70.776761ms to StartCluster
	I1018 09:43:53.001153  366919 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:53.001219  366919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:53.002442  366919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:53.002663  366919 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:43:53.002721  366919 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:43:53.002856  366919 addons.go:69] Setting storage-provisioner=true in profile "no-preload-589869"
	I1018 09:43:53.002882  366919 addons.go:238] Setting addon storage-provisioner=true in "no-preload-589869"
	W1018 09:43:53.002894  366919 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:43:53.002887  366919 addons.go:69] Setting dashboard=true in profile "no-preload-589869"
	I1018 09:43:53.002901  366919 addons.go:69] Setting default-storageclass=true in profile "no-preload-589869"
	I1018 09:43:53.002923  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.002926  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:53.002939  366919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-589869"
	I1018 09:43:53.002918  366919 addons.go:238] Setting addon dashboard=true in "no-preload-589869"
	W1018 09:43:53.002984  366919 addons.go:247] addon dashboard should already be in state true
	I1018 09:43:53.003005  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.003292  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.003407  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.003414  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.005729  366919 out.go:179] * Verifying Kubernetes components...
	I1018 09:43:53.008648  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:53.029317  366919 addons.go:238] Setting addon default-storageclass=true in "no-preload-589869"
	W1018 09:43:53.029345  366919 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:43:53.029377  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.029962  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.032926  366919 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:43:53.032997  366919 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:43:53.033816  366919 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:53.033856  366919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:43:53.033920  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.035675  366919 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1018 09:43:48.598741  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	W1018 09:43:50.599471  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	W1018 09:43:52.599855  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	I1018 09:43:48.875056  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:48.875508  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:48.875558  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:48.875611  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:48.902616  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:48.902648  353123 cri.go:89] found id: "bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:48.902654  353123 cri.go:89] found id: ""
	I1018 09:43:48.902663  353123 logs.go:282] 2 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9]
	I1018 09:43:48.902720  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.906791  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.910428  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:48.910483  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:48.937054  353123 cri.go:89] found id: ""
	I1018 09:43:48.937077  353123 logs.go:282] 0 containers: []
	W1018 09:43:48.937087  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:48.937094  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:48.937156  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:48.963525  353123 cri.go:89] found id: ""
	I1018 09:43:48.963551  353123 logs.go:282] 0 containers: []
	W1018 09:43:48.963563  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:48.963571  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:48.963637  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:48.989538  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:48.989558  353123 cri.go:89] found id: ""
	I1018 09:43:48.989577  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:48.989637  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.993391  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:48.993471  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:49.019697  353123 cri.go:89] found id: ""
	I1018 09:43:49.019720  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.019728  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:49.019734  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:49.019793  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:49.045695  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:49.045715  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:49.045720  353123 cri.go:89] found id: ""
	I1018 09:43:49.045727  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:49.045773  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:49.049954  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:49.053609  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:49.053671  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:49.079477  353123 cri.go:89] found id: ""
	I1018 09:43:49.079504  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.079515  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:49.079522  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:49.079569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:49.106640  353123 cri.go:89] found id: ""
	I1018 09:43:49.106663  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.106670  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:49.106685  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:49.106695  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:49.168459  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:49.168493  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:49.223579  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:49.223611  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:49.223627  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:49.255960  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:49.255988  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:49.297850  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:49.297879  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:49.325174  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:49.325204  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:49.343369  353123 logs.go:123] Gathering logs for kube-apiserver [bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9] ...
	I1018 09:43:49.343399  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:49.374840  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:49.374873  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:49.401680  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:49.401705  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:49.452420  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:49.452452  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:51.983883  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:51.984247  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:51.984295  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:51.984342  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:52.013861  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:52.013885  353123 cri.go:89] found id: ""
	I1018 09:43:52.013895  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:52.013972  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.018080  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:52.018145  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:52.046611  353123 cri.go:89] found id: ""
	I1018 09:43:52.046640  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.046651  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:52.046659  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:52.046715  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:52.076555  353123 cri.go:89] found id: ""
	I1018 09:43:52.076577  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.076584  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:52.076590  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:52.076654  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:52.104654  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:52.104676  353123 cri.go:89] found id: ""
	I1018 09:43:52.104684  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:52.104742  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.108491  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:52.108546  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:52.135425  353123 cri.go:89] found id: ""
	I1018 09:43:52.135453  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.135463  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:52.135471  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:52.135525  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:52.167838  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:52.167873  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:52.167880  353123 cri.go:89] found id: ""
	I1018 09:43:52.167893  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:52.167961  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.172042  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.175551  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:52.175639  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:52.202083  353123 cri.go:89] found id: ""
	I1018 09:43:52.202111  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.202121  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:52.202129  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:52.202188  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:52.229647  353123 cri.go:89] found id: ""
	I1018 09:43:52.229679  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.229698  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:52.229717  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:52.229733  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:52.266887  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:52.266924  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:52.301334  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:52.301368  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:52.385415  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:52.385451  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:52.406129  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:52.406164  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:52.453657  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:52.453695  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:52.482385  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:52.482407  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:52.509024  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:52.509052  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:52.554701  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:52.554725  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:52.614373  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:53.036676  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:43:53.036699  366919 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:43:53.036791  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.063982  366919 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:53.064007  366919 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:43:53.064086  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.071330  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.073249  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.093297  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.158365  366919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:53.170898  366919 node_ready.go:35] waiting up to 6m0s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:53.185397  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:43:53.185445  366919 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:43:53.186368  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:53.199966  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:43:53.199990  366919 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:43:53.204302  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:53.215788  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:43:53.215813  366919 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:43:53.231349  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:43:53.231375  366919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:43:53.250124  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:43:53.250150  366919 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:43:53.267779  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:43:53.267812  366919 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:43:53.281749  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:43:53.281778  366919 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:43:53.294674  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:43:53.294701  366919 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:43:53.307138  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:43:53.307164  366919 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:43:53.319525  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:43:54.972861  366919 node_ready.go:49] node "no-preload-589869" is "Ready"
	I1018 09:43:54.972895  366919 node_ready.go:38] duration metric: took 1.801958248s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:54.972914  366919 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:54.972971  366919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:55.563137  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.376728205s)
	I1018 09:43:55.563243  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.358900953s)
	I1018 09:43:55.563345  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.243780483s)
	I1018 09:43:55.563367  366919 api_server.go:72] duration metric: took 2.560676225s to wait for apiserver process to appear ...
	I1018 09:43:55.563381  366919 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:55.563409  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:55.565046  366919 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-589869 addons enable metrics-server
	
	I1018 09:43:55.568344  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:43:55.568376  366919 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:43:55.570502  366919 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:43:55.573912  366919 addons.go:514] duration metric: took 2.571190944s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
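For context on the 500s above: kube-apiserver's /healthz aggregates its poststarthook checks, returning 500 with one [+]/[-] line per check until every hook (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) has completed, after which it returns 200 "ok". A minimal Go sketch of the same polling pattern — hypothetical, not minikube's actual api_server.go code; the address is taken from the logs above and TLS verification is skipped purely for illustration (a real client would present the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Assumption: apiserver address from the logs above.
	url := "https://192.168.94.2:8443/healthz"
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip cert verification instead of
			// loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "connect: connection refused" while the apiserver is down
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// A 500 body lists each [+]/[-] poststarthook check.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
}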
	W1018 09:43:55.099487  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	W1018 09:43:57.600137  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	I1018 09:43:55.114551  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:55.115085  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:55.115143  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:55.115192  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:55.161219  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:55.161247  353123 cri.go:89] found id: ""
	I1018 09:43:55.161257  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:55.161324  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.170224  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:55.170305  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:55.200890  353123 cri.go:89] found id: ""
	I1018 09:43:55.200918  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.200928  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:55.200935  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:55.200980  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:55.229782  353123 cri.go:89] found id: ""
	I1018 09:43:55.229812  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.229842  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:55.229850  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:55.229912  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:55.261593  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:55.261618  353123 cri.go:89] found id: ""
	I1018 09:43:55.261629  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:55.261690  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.266623  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:55.266698  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:55.298111  353123 cri.go:89] found id: ""
	I1018 09:43:55.298141  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.298151  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:55.298160  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:55.298226  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:55.331797  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:55.331830  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:55.331837  353123 cri.go:89] found id: ""
	I1018 09:43:55.331846  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:55.331924  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.337277  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.342655  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:55.342725  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:55.381359  353123 cri.go:89] found id: ""
	I1018 09:43:55.381577  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.381598  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:55.381609  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:55.381768  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:55.426587  353123 cri.go:89] found id: ""
	I1018 09:43:55.426617  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.426628  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:55.426696  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:55.426715  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:55.516649  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:55.516686  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:55.537369  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:55.537405  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:55.602370  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:55.602388  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:55.602404  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:55.630263  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:55.630305  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:55.674395  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:55.674438  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:55.727344  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:55.727436  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:55.783819  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:55.783881  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:55.833768  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:55.833806  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:58.367561  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:58.368111  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:58.368174  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:58.368237  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:58.404599  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:58.404624  353123 cri.go:89] found id: ""
	I1018 09:43:58.404635  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:58.404700  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.409553  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:58.409635  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:58.444727  353123 cri.go:89] found id: ""
	I1018 09:43:58.444757  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.444769  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:58.444779  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:58.444877  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:58.477665  353123 cri.go:89] found id: ""
	I1018 09:43:58.477687  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.477695  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:58.477702  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:58.477748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:58.506961  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:58.506987  353123 cri.go:89] found id: ""
	I1018 09:43:58.506998  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:58.507061  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.512256  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:58.512331  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:58.545142  353123 cri.go:89] found id: ""
	I1018 09:43:58.545172  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.545183  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:58.545191  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:58.545258  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:58.578900  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:58.578928  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:58.578934  353123 cri.go:89] found id: ""
	I1018 09:43:58.578944  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:58.579006  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.584607  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.590709  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:58.590932  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:58.627132  353123 cri.go:89] found id: ""
	I1018 09:43:58.627158  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.627168  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:58.627176  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:58.627234  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:56.063808  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:56.069236  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:43:56.069266  366919 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:43:56.563889  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:56.568121  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:43:56.569093  366919 api_server.go:141] control plane version: v1.34.1
	I1018 09:43:56.569119  366919 api_server.go:131] duration metric: took 1.005724823s to wait for apiserver health ...
	I1018 09:43:56.569128  366919 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:56.572026  366919 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:56.572057  366919 system_pods.go:61] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:56.572067  366919 system_pods.go:61] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:56.572075  366919 system_pods.go:61] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:56.572084  366919 system_pods.go:61] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:56.572091  366919 system_pods.go:61] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:56.572098  366919 system_pods.go:61] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:56.572106  366919 system_pods.go:61] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:56.572115  366919 system_pods.go:61] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Running
	I1018 09:43:56.572123  366919 system_pods.go:74] duration metric: took 2.98957ms to wait for pod list to return data ...
	I1018 09:43:56.572134  366919 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:56.574301  366919 default_sa.go:45] found service account: "default"
	I1018 09:43:56.574318  366919 default_sa.go:55] duration metric: took 2.177253ms for default service account to be created ...
	I1018 09:43:56.574325  366919 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:56.577061  366919 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:56.577086  366919 system_pods.go:89] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:56.577093  366919 system_pods.go:89] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:56.577108  366919 system_pods.go:89] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:56.577117  366919 system_pods.go:89] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:56.577128  366919 system_pods.go:89] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:56.577134  366919 system_pods.go:89] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:56.577140  366919 system_pods.go:89] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:56.577145  366919 system_pods.go:89] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Running
	I1018 09:43:56.577151  366919 system_pods.go:126] duration metric: took 2.821656ms to wait for k8s-apps to be running ...
	I1018 09:43:56.577160  366919 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:56.577201  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:56.590429  366919 system_svc.go:56] duration metric: took 13.258132ms WaitForService to wait for kubelet
	I1018 09:43:56.590454  366919 kubeadm.go:586] duration metric: took 3.587767635s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:56.590470  366919 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:56.593275  366919 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:56.593309  366919 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:56.593327  366919 node_conditions.go:105] duration metric: took 2.852019ms to run NodePressure ...
	I1018 09:43:56.593344  366919 start.go:241] waiting for startup goroutines ...
	I1018 09:43:56.593358  366919 start.go:246] waiting for cluster config update ...
	I1018 09:43:56.593376  366919 start.go:255] writing updated cluster config ...
	I1018 09:43:56.593687  366919 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:56.597912  366919 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:56.601361  366919 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pck54" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:43:58.609872  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:00.610618  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:43:59.600964  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	I1018 09:44:01.101049  364574 pod_ready.go:94] pod "coredns-5dd5756b68-wklp4" is "Ready"
	I1018 09:44:01.101082  364574 pod_ready.go:86] duration metric: took 17.007633554s for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.105018  364574 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.111217  364574 pod_ready.go:94] pod "etcd-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.111243  364574 pod_ready.go:86] duration metric: took 6.201206ms for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.117066  364574 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.122981  364574 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.123002  364574 pod_ready.go:86] duration metric: took 5.915488ms for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.126752  364574 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.298102  364574 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.298145  364574 pod_ready.go:86] duration metric: took 171.370267ms for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.498818  364574 pod_ready.go:83] waiting for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.898035  364574 pod_ready.go:94] pod "kube-proxy-spkr8" is "Ready"
	I1018 09:44:01.898066  364574 pod_ready.go:86] duration metric: took 399.178015ms for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.098403  364574 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.496992  364574 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-619885" is "Ready"
	I1018 09:44:02.497018  364574 pod_ready.go:86] duration metric: took 398.590697ms for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.497030  364574 pod_ready.go:40] duration metric: took 18.40808647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:44:02.546419  364574 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:44:02.551194  364574 out.go:203] 
	W1018 09:44:02.552350  364574 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:44:02.553351  364574 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:44:02.554373  364574 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-619885" cluster and "default" namespace by default
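While its apiserver refuses connections, the 353123 process above repeatedly falls back to probing CRI-O directly: list container IDs for each control-plane component by name, then tail the logs of whatever it finds. A hypothetical Go sketch of that two-step crictl pattern (assumes crictl and password-less sudo on the node; not minikube's actual cri.go/logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: sudo crictl ps -a --quiet --name=<name>
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if len(ids) == 0 {
		fmt.Println(`No container was found matching "kube-apiserver"`)
		return
	}
	for _, id := range ids {
		// Mirrors: sudo crictl logs --tail 400 <id>
		logs, _ := exec.Command("sudo", "crictl", "logs",
			"--tail", "400", id).CombinedOutput()
		fmt.Printf("logs for %s:\n%s\n", id, logs)
	}
}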
	I1018 09:43:58.663618  353123 cri.go:89] found id: ""
	I1018 09:43:58.663648  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.663659  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:58.663715  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:58.663738  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:58.739942  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:58.739966  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:58.739982  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:58.783522  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:58.783569  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:58.821427  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:58.821460  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:58.926128  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:58.926216  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:58.955326  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:58.955412  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:59.018958  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:59.019007  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:59.054651  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:59.054684  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:59.118884  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:59.118927  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:01.659487  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:01.659919  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:01.659991  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:01.660080  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:01.694753  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:01.694779  353123 cri.go:89] found id: ""
	I1018 09:44:01.694789  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:01.694885  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.700222  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:01.700310  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:01.737639  353123 cri.go:89] found id: ""
	I1018 09:44:01.737666  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.737676  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:01.737683  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:01.737744  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:01.771464  353123 cri.go:89] found id: ""
	I1018 09:44:01.771495  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.771507  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:01.771515  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:01.771601  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:01.808752  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:01.808783  353123 cri.go:89] found id: ""
	I1018 09:44:01.808796  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:01.808895  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.813969  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:01.814051  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:01.850775  353123 cri.go:89] found id: ""
	I1018 09:44:01.850811  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.850838  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:01.850847  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:01.850918  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:01.886907  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:01.886933  353123 cri.go:89] found id: ""
	I1018 09:44:01.886944  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:01.887011  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.891964  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:01.892033  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:01.926001  353123 cri.go:89] found id: ""
	I1018 09:44:01.926029  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.926053  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:01.926061  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:01.926285  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:01.965174  353123 cri.go:89] found id: ""
	I1018 09:44:01.965205  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.965216  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:01.965227  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:01.965242  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:02.028887  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:02.028924  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:02.067308  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:02.067361  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:02.171934  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:02.171971  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:02.198336  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:02.198372  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:02.268275  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:02.268297  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:02.268316  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:02.304755  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:02.304789  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:02.357657  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:02.357692  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	W1018 09:44:03.108126  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:05.108347  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	I1018 09:44:04.887479  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:04.887937  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:04.888003  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:04.888064  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:04.924175  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:04.924199  353123 cri.go:89] found id: ""
	I1018 09:44:04.924210  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:04.924268  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:04.929146  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:04.929224  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:04.963702  353123 cri.go:89] found id: ""
	I1018 09:44:04.963729  353123 logs.go:282] 0 containers: []
	W1018 09:44:04.963741  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:04.963748  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:04.963806  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:05.000010  353123 cri.go:89] found id: ""
	I1018 09:44:05.000041  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.000052  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:05.000060  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:05.000121  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:05.035523  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:05.035549  353123 cri.go:89] found id: ""
	I1018 09:44:05.035560  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:05.035630  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:05.040903  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:05.040971  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:05.076714  353123 cri.go:89] found id: ""
	I1018 09:44:05.076746  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.076758  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:05.076765  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:05.076856  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:05.112594  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:05.112619  353123 cri.go:89] found id: ""
	I1018 09:44:05.112629  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:05.112694  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:05.117677  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:05.117748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:05.151934  353123 cri.go:89] found id: ""
	I1018 09:44:05.151962  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.151972  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:05.151980  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:05.152038  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:05.186779  353123 cri.go:89] found id: ""
	I1018 09:44:05.186810  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.186834  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:05.186845  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:05.186863  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:05.231206  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:05.231246  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:05.295779  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:05.295832  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:05.331030  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:05.331067  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:05.397158  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:05.397194  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:05.428937  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:05.428966  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:05.509640  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:05.509673  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:05.528480  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:05.528507  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:05.593478  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:08.095101  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:08.095520  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:08.095579  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:08.095636  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:08.124614  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:08.124632  353123 cri.go:89] found id: ""
	I1018 09:44:08.124640  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:08.124693  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.128666  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:08.128725  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:08.154936  353123 cri.go:89] found id: ""
	I1018 09:44:08.154965  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.154976  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:08.154985  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:08.155052  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:08.180690  353123 cri.go:89] found id: ""
	I1018 09:44:08.180714  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.180724  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:08.180732  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:08.180789  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:08.206537  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:08.206558  353123 cri.go:89] found id: ""
	I1018 09:44:08.206568  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:08.206629  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.210512  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:08.210571  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:08.235865  353123 cri.go:89] found id: ""
	I1018 09:44:08.235889  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.235897  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:08.235904  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:08.235959  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:08.262042  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:08.262064  353123 cri.go:89] found id: ""
	I1018 09:44:08.262073  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:08.262131  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.265937  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:08.265992  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:08.291624  353123 cri.go:89] found id: ""
	I1018 09:44:08.291651  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.291660  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:08.291666  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:08.291714  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:08.318553  353123 cri.go:89] found id: ""
	I1018 09:44:08.318582  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.318592  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:08.318601  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:08.318624  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:08.337532  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:08.337561  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:08.393037  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:08.393059  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:08.393074  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:08.427614  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:08.427645  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:08.474784  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:08.474828  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:08.501654  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:08.501682  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:08.546229  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:08.546263  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:08.576106  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:08.576135  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1018 09:44:07.606026  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:09.606923  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	I1018 09:44:11.149661  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:11.150103  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:11.150151  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:11.150205  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:11.176524  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:11.176551  353123 cri.go:89] found id: ""
	I1018 09:44:11.176562  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:11.176621  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.180677  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:11.180746  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:11.206839  353123 cri.go:89] found id: ""
	I1018 09:44:11.206865  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.206876  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:11.206884  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:11.206935  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:11.232446  353123 cri.go:89] found id: ""
	I1018 09:44:11.232486  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.232498  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:11.232507  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:11.232569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:11.259690  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:11.259717  353123 cri.go:89] found id: ""
	I1018 09:44:11.259728  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:11.259788  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.263862  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:11.263929  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:11.290304  353123 cri.go:89] found id: ""
	I1018 09:44:11.290333  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.290343  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:11.290351  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:11.290415  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:11.317474  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:11.317499  353123 cri.go:89] found id: ""
	I1018 09:44:11.317509  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:11.317563  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.321537  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:11.321610  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:11.349912  353123 cri.go:89] found id: ""
	I1018 09:44:11.349943  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.349955  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:11.349964  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:11.350101  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:11.377180  353123 cri.go:89] found id: ""
	I1018 09:44:11.377208  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.377219  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:11.377232  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:11.377255  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:11.421302  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:11.421338  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:11.448331  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:11.448356  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:11.494879  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:11.494915  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:11.525200  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:11.525227  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:11.601275  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:11.601309  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:11.620467  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:11.620494  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:11.678481  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:11.678502  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:11.678521  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	W1018 09:44:12.106646  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:14.106811  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.604278772Z" level=info msg="Created container e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=e52f3f4c-4f40-4d7b-a55c-29edd30ae6ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.604937331Z" level=info msg="Starting container: e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc" id=887915b9-2358-4a00-ac87-dba57fb24af2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.60707943Z" level=info msg="Started container" PID=1718 containerID=e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper id=887915b9-2358-4a00-ac87-dba57fb24af2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ceb5ef0e56991cab30400c892ee50ee900dbba37e2ad24b03d4226197441651
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.865636616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e07379b2-f9dd-49ca-9071-562f1dbadb92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.868644718Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=95085fa6-7b80-47a5-8871-b91fb0099e4f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.871420011Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=0afe629b-5101-4fe6-9505-442f3d821404 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.873320763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.882250793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.882902469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.908028362Z" level=info msg="Created container 23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=0afe629b-5101-4fe6-9505-442f3d821404 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.908610399Z" level=info msg="Starting container: 23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37" id=fd6633bf-d2d2-487f-8ab4-83f767bf7998 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.910638483Z" level=info msg="Started container" PID=1747 containerID=23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper id=fd6633bf-d2d2-487f-8ab4-83f767bf7998 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ceb5ef0e56991cab30400c892ee50ee900dbba37e2ad24b03d4226197441651
	Oct 18 09:44:03 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:03.871897889Z" level=info msg="Removing container: e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc" id=963761fe-2cdf-436a-8799-0b6cbcfe5f8f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:03 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:03.882932952Z" level=info msg="Removed container e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=963761fe-2cdf-436a-8799-0b6cbcfe5f8f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.895720602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e98f5802-0d1e-448f-b0bf-5e831e6d40a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.896656095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=04b4183f-62aa-419b-a034-68ea7e025f78 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.897585754Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e91fd285-04a2-4822-993a-10f81840915b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.89788111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.9032139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.903448944Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b46f0eaf384e6736dee533f3a22d80498dcd4493d52943f6144839a4b63bd7c7/merged/etc/passwd: no such file or directory"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.903594105Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b46f0eaf384e6736dee533f3a22d80498dcd4493d52943f6144839a4b63bd7c7/merged/etc/group: no such file or directory"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.904036138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.928018886Z" level=info msg="Created container 5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea: kube-system/storage-provisioner/storage-provisioner" id=e91fd285-04a2-4822-993a-10f81840915b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.928572492Z" level=info msg="Starting container: 5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea" id=af481f04-fc67-457e-a941-4ed3c8e0e311 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.930346852Z" level=info msg="Started container" PID=1761 containerID=5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea description=kube-system/storage-provisioner/storage-provisioner id=af481f04-fc67-457e-a941-4ed3c8e0e311 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f262ba63a4f9a3bfc95e4d7eb0e4ad95dec1f73cc8610145db80589932e4821
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5b9a25d7ca89e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           3 seconds ago       Running             storage-provisioner         1                   1f262ba63a4f9       storage-provisioner                              kube-system
	23f28b0004688       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   1                   7ceb5ef0e5699       dashboard-metrics-scraper-5f989dc9cf-fm56d       kubernetes-dashboard
	2d6a72283c35f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   a49233e499b89       kubernetes-dashboard-8694d4445c-88pgw            kubernetes-dashboard
	3d71415e5d23f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           26 seconds ago      Running             coredns                     0                   15879cd00e6d7       coredns-5dd5756b68-wklp4                         kube-system
	f97eabcf99d6b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           26 seconds ago      Running             busybox                     1                   094fce3a1dc97       busybox                                          default
	868ad4152848f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           34 seconds ago      Exited              storage-provisioner         0                   1f262ba63a4f9       storage-provisioner                              kube-system
	2d9de25ec275f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           34 seconds ago      Running             kindnet-cni                 0                   5bf812a56d015       kindnet-vpnhf                                    kube-system
	9bac4afda2cd6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           34 seconds ago      Running             kube-proxy                  0                   1bce143ff28f6       kube-proxy-spkr8                                 kube-system
	7fe7bf854b172       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           37 seconds ago      Running             kube-scheduler              0                   373a1c04046f4       kube-scheduler-old-k8s-version-619885            kube-system
	fdfeb0ddcbc9e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           37 seconds ago      Running             etcd                        0                   adab16a038eaa       etcd-old-k8s-version-619885                      kube-system
	9dea26c3889d8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           37 seconds ago      Running             kube-controller-manager     0                   2a7cdbf4dfa90       kube-controller-manager-old-k8s-version-619885   kube-system
	c46ec81af1bdf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           37 seconds ago      Running             kube-apiserver              0                   904a10e46f596       kube-apiserver-old-k8s-version-619885            kube-system
	
	
	==> coredns [3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54024 - 50201 "HINFO IN 1289931151697642964.8890851655498100000. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067747865s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-619885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-619885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-619885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-619885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:44:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:43:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-619885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                5fe2f0a1-057b-421d-9214-f38cf6889451
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-5dd5756b68-wklp4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     88s
	  kube-system                 etcd-old-k8s-version-619885                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         103s
	  kube-system                 kindnet-vpnhf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      88s
	  kube-system                 kube-apiserver-old-k8s-version-619885             250m (3%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-controller-manager-old-k8s-version-619885    200m (2%)     0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-spkr8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-scheduler-old-k8s-version-619885             100m (1%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fm56d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-88pgw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 88s                kube-proxy       
	  Normal  Starting                 34s                kube-proxy       
	  Normal  Starting                 102s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s               kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s               kubelet          Node old-k8s-version-619885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s               kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           88s                node-controller  Node old-k8s-version-619885 event: Registered Node old-k8s-version-619885 in Controller
	  Normal  NodeReady                75s                kubelet          Node old-k8s-version-619885 status is now: NodeReady
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node old-k8s-version-619885 event: Registered Node old-k8s-version-619885 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9] <==
	{"level":"info","ts":"2025-10-18T09:43:40.345896Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:43:40.346047Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:43:40.345061Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346606Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346629Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:43:40.34891Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:43:40.349126Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:43:40.349158Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:43:40.349188Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:43:40.349199Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:43:41.637635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.63768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.637727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.637746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.63874Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-619885 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:43:41.638749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:43:41.638779Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:43:41.639089Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:43:41.63911Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:43:41.640031Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T09:43:41.640068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:44:17 up  1:26,  0 user,  load average: 2.42, 2.85, 1.79
	Linux old-k8s-version-619885 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070] <==
	I1018 09:43:43.304283       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:43:43.304501       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:43:43.304628       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:43:43.304648       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:43:43.304671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:43:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:43:43.591337       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:43:43.591391       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:43:43.591402       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:43:43.591568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:43:43.900019       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:43:43.900313       1 metrics.go:72] Registering metrics
	I1018 09:43:43.900395       1 controller.go:711] "Syncing nftables rules"
	I1018 09:43:53.592006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:43:53.592085       1 main.go:301] handling current node
	I1018 09:44:03.591879       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:44:03.591908       1 main.go:301] handling current node
	I1018 09:44:13.591245       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:44:13.591271       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3] <==
	I1018 09:43:42.615176       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:43:42.615179       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 09:43:42.615326       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:43:42.615383       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:43:42.615392       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:43:42.615397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:43:42.615404       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:43:42.615711       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:43:42.615753       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:43:42.615897       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1018 09:43:42.616194       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:43:43.435704       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:43:43.469578       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:43:43.486780       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:43:43.494256       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:43:43.500540       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:43:43.517125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:43:43.533160       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.208.17"}
	I1018 09:43:43.546668       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.120.167"}
	E1018 09:43:52.616906       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:43:55.546180       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:43:55.648985       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:43:55.710514       1 controller.go:624] quota admission added evaluator for: endpoints
	E1018 09:44:02.617462       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:44:12.618368       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b] <==
	I1018 09:43:55.451471       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:43:55.655529       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 09:43:55.657712       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 09:43:55.669343       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-88pgw"
	I1018 09:43:55.670489       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fm56d"
	I1018 09:43:55.682319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.954478ms"
	I1018 09:43:55.682452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.420465ms"
	I1018 09:43:55.692942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.38267ms"
	I1018 09:43:55.693157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.439µs"
	I1018 09:43:55.701965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.582805ms"
	I1018 09:43:55.702050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.149µs"
	I1018 09:43:55.707429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="323.531µs"
	I1018 09:43:55.724102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.8µs"
	I1018 09:43:55.726611       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1018 09:43:55.726652       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1018 09:43:55.770784       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:43:55.819743       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:43:55.819784       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:44:00.727141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.955228ms"
	I1018 09:44:00.727299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.552µs"
	I1018 09:44:00.887880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.732786ms"
	I1018 09:44:00.888197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.427µs"
	I1018 09:44:02.876172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.366µs"
	I1018 09:44:03.883302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="131.947µs"
	I1018 09:44:04.887105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.445µs"
	
	
	==> kube-proxy [9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f] <==
	I1018 09:43:43.190987       1 server_others.go:69] "Using iptables proxy"
	I1018 09:43:43.201540       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:43:43.223568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:43:43.225907       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:43:43.225989       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:43:43.226018       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:43:43.226081       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:43:43.226447       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:43:43.226503       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:43.228920       1 config.go:315] "Starting node config controller"
	I1018 09:43:43.228952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:43:43.228939       1 config.go:188] "Starting service config controller"
	I1018 09:43:43.228982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:43:43.229244       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:43:43.229259       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:43:43.330039       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:43:43.330117       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:43:43.330153       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a] <==
	E1018 09:43:42.595004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 09:43:42.595008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.594966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 09:43:42.595060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 09:43:42.595083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595149       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 09:43:42.595151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 09:43:42.595162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 09:43:42.595165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 09:43:42.595172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 09:43:42.595180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 09:43:42.595227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 09:43:42.595241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 09:43:42.595224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 09:43:42.595543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:43:42.595567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 09:43:42.595576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:43:42.595587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:43:42.595586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1018 09:43:42.686154       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.406379     719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-config-volume podName:666ccb81-9bb0-4ee0-8fe1-8d060091f9b0 nodeName:}" failed. No retries permitted until 2025-10-18 09:43:50.406364612 +0000 UTC m=+10.703228459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-config-volume") pod "coredns-5dd5756b68-wklp4" (UID: "666ccb81-9bb0-4ee0-8fe1-8d060091f9b0") : object "kube-system"/"coredns" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506785     719 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506855     719 projected.go:198] Error preparing data for projected volume kube-api-access-55xz5 for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506932     719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e50d21c-d2e2-4cc7-b111-04c19153fc41-kube-api-access-55xz5 podName:2e50d21c-d2e2-4cc7-b111-04c19153fc41 nodeName:}" failed. No retries permitted until 2025-10-18 09:43:50.506910652 +0000 UTC m=+10.803774504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55xz5" (UniqueName: "kubernetes.io/projected/2e50d21c-d2e2-4cc7-b111-04c19153fc41-kube-api-access-55xz5") pod "busybox" (UID: "2e50d21c-d2e2-4cc7-b111-04c19153fc41") : object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.676760     719 topology_manager.go:215] "Topology Admit Handler" podUID="7390a37b-b66c-4dbe-85de-5ba96c9a7f24" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.681503     719 topology_manager.go:215] "Topology Admit Handler" podUID="b01b0763-878c-4706-a4ce-1b579eac767d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760271     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7390a37b-b66c-4dbe-85de-5ba96c9a7f24-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-88pgw\" (UID: \"7390a37b-b66c-4dbe-85de-5ba96c9a7f24\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760343     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7hn\" (UniqueName: \"kubernetes.io/projected/b01b0763-878c-4706-a4ce-1b579eac767d-kube-api-access-5r7hn\") pod \"dashboard-metrics-scraper-5f989dc9cf-fm56d\" (UID: \"b01b0763-878c-4706-a4ce-1b579eac767d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760775     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqhq\" (UniqueName: \"kubernetes.io/projected/7390a37b-b66c-4dbe-85de-5ba96c9a7f24-kube-api-access-rcqhq\") pod \"kubernetes-dashboard-8694d4445c-88pgw\" (UID: \"7390a37b-b66c-4dbe-85de-5ba96c9a7f24\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760873     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b01b0763-878c-4706-a4ce-1b579eac767d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fm56d\" (UID: \"b01b0763-878c-4706-a4ce-1b579eac767d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:44:02 old-k8s-version-619885 kubelet[719]: I1018 09:44:02.865075     719 scope.go:117] "RemoveContainer" containerID="e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc"
	Oct 18 09:44:02 old-k8s-version-619885 kubelet[719]: I1018 09:44:02.876075     719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw" podStartSLOduration=3.855431169 podCreationTimestamp="2025-10-18 09:43:55 +0000 UTC" firstStartedPulling="2025-10-18 09:43:56.00286729 +0000 UTC m=+16.299731127" lastFinishedPulling="2025-10-18 09:44:00.023454168 +0000 UTC m=+20.320318015" observedRunningTime="2025-10-18 09:44:00.876183203 +0000 UTC m=+21.173047085" watchObservedRunningTime="2025-10-18 09:44:02.876018057 +0000 UTC m=+23.172881912"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: I1018 09:44:03.870479     719 scope.go:117] "RemoveContainer" containerID="e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: I1018 09:44:03.870687     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: E1018 09:44:03.871062     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:04 old-k8s-version-619885 kubelet[719]: I1018 09:44:04.875354     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:04 old-k8s-version-619885 kubelet[719]: E1018 09:44:04.875744     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:05 old-k8s-version-619885 kubelet[719]: I1018 09:44:05.983233     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:05 old-k8s-version-619885 kubelet[719]: E1018 09:44:05.983591     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:13 old-k8s-version-619885 kubelet[719]: I1018 09:44:13.895325     719 scope.go:117] "RemoveContainer" containerID="868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	Oct 18 09:44:14 old-k8s-version-619885 kubelet[719]: I1018 09:44:14.621227     719 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: kubelet.service: Consumed 1.205s CPU time.
	
	
	==> kubernetes-dashboard [2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64] <==
	2025/10/18 09:44:00 Using namespace: kubernetes-dashboard
	2025/10/18 09:44:00 Using in-cluster config to connect to apiserver
	2025/10/18 09:44:00 Using secret token for csrf signing
	2025/10/18 09:44:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:44:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:44:00 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:44:00 Generating JWE encryption key
	2025/10/18 09:44:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:44:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:44:00 Initializing JWE encryption key from synchronized object
	2025/10/18 09:44:00 Creating in-cluster Sidecar client
	2025/10/18 09:44:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:44:00 Serving insecurely on HTTP port: 9090
	2025/10/18 09:44:00 Starting overwatch
	
	
	==> storage-provisioner [5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea] <==
	I1018 09:44:13.943233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:44:13.951919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:44:13.951970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597] <==
	I1018 09:43:43.159793       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:44:13.163097       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-619885 -n old-k8s-version-619885: exit status 2 (320.550485ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-619885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
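The field-selector query above is how the harness lists non-Running pods across all namespaces. A hedged client-go equivalent; loading the default kubeconfig is a simplifying assumption (the real invocation selects the old-k8s-version-619885 context):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Empty namespace lists across all namespaces, like kubectl's -A flag.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name) // same data the jsonpath expression extracts
		}
	}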
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
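A minimal sketch of producing such a proxy snapshot; the "<empty>" placeholder convention matches the line above, and everything else here is illustrative rather than the harness's actual code:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Snapshot the proxy-related environment, printing "<empty>" for unset
		// variables so absent and empty values read the same way as above.
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q ", k, v)
		}
		fmt.Println()
	}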
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-619885
helpers_test.go:243: (dbg) docker inspect old-k8s-version-619885:

-- stdout --
	[
	    {
	        "Id": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	        "Created": "2025-10-18T09:42:17.27822051Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 364774,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:43:33.815850746Z",
	            "FinishedAt": "2025-10-18T09:43:33.019788086Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/hosts",
	        "LogPath": "/var/lib/docker/containers/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191/1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191-json.log",
	        "Name": "/old-k8s-version-619885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-619885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-619885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1ed6b6e47d49fb98b9b0a5d9d4f4e9f6e9d80d6e87013b2239ef8116a6d76191",
	                "LowerDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e897cdf36c8a11bd13de0e7fe8917a83f0cd612a7600cb4b1650393d6766bf33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-619885",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-619885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-619885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-619885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8b7f5d056d403b4312f8e4d5df0917c98c1d0d6970ae9a0ad0d8374b29dbc1b3",
	            "SandboxKey": "/var/run/docker/netns/8b7f5d056d40",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-619885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:ab:ca:79:f3:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f172a0295669142d53ec5906c89946014e1c53fe54e9e8bba2fffa329bff8586",
	                    "EndpointID": "321df6cce21b4d40cedb28e419b6b1828be8af7d5372958373eda7681745fcda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-619885",
	                        "1ed6b6e47d49"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
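The Ports map in the inspect output above is where the host-side endpoints live, e.g. the apiserver's 8443/tcp bound to 127.0.0.1:33194. A sketch of reading that binding with the Docker Go SDK; the SDK dependency (github.com/docker/docker/client) is an assumption, since the report itself only shows the CLI:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		insp, err := cli.ContainerInspect(context.Background(), "old-k8s-version-619885")
		if err != nil {
			panic(err)
		}
		// NetworkSettings.Ports is the same map rendered in the dump above.
		if b, ok := insp.NetworkSettings.Ports["8443/tcp"]; ok && len(b) > 0 {
			fmt.Printf("apiserver published at %s:%s\n", b[0].HostIP, b[0].HostPort) // 127.0.0.1:33194
		}
	}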
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885: exit status 2 (315.007588ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
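The --format flag evaluated above takes a Go template applied to minikube's status struct. A hedged illustration with text/template; this Status type is invented for the sketch, not minikube's internal definition:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct a template like {{.Host}} or
	// {{.APIServer}} is executed against.
	type Status struct{ Host, Kubelet, APIServer string }

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// Prints "Running", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Running"})
	}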
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-619885 logs -n 25: (1.077758025s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p pause-238319 --alsologtostderr -v=5                                                                                                                                                                                                        │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │                     │
	│ delete  │ -p pause-238319                                                                                                                                                                                                                               │ pause-238319              │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                      │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ force-systemd-flag-565668 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ delete  │ -p force-systemd-flag-565668                                                                                                                                                                                                                  │ force-systemd-flag-565668 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:41 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:43:45
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:43:45.888389  366919 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:43:45.888659  366919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:45.888668  366919 out.go:374] Setting ErrFile to fd 2...
	I1018 09:43:45.888672  366919 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:43:45.888914  366919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:43:45.889335  366919 out.go:368] Setting JSON to false
	I1018 09:43:45.890614  366919 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5170,"bootTime":1760775456,"procs":325,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:43:45.890707  366919 start.go:141] virtualization: kvm guest
	I1018 09:43:45.892590  366919 out.go:179] * [no-preload-589869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:43:45.893663  366919 notify.go:220] Checking for updates...
	I1018 09:43:45.893672  366919 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:43:45.894765  366919 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:43:45.895898  366919 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:45.897118  366919 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:43:45.898213  366919 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:43:45.899245  366919 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:43:45.900700  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:45.901184  366919 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:43:45.924781  366919 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:43:45.924886  366919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:43:45.981626  366919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:43:45.971756736 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:43:45.981735  366919 docker.go:318] overlay module found
	I1018 09:43:45.983342  366919 out.go:179] * Using the docker driver based on existing profile
	I1018 09:43:45.984469  366919 start.go:305] selected driver: docker
	I1018 09:43:45.984486  366919 start.go:925] validating driver "docker" against &{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:45.984565  366919 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:43:45.985110  366919 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:43:46.037775  366919 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:43:46.028344169 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:43:46.038191  366919 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:46.038224  366919 cni.go:84] Creating CNI manager for ""
	I1018 09:43:46.038282  366919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:43:46.038328  366919 start.go:349] cluster config:
	{Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:46.040386  366919 out.go:179] * Starting "no-preload-589869" primary control-plane node in "no-preload-589869" cluster
	I1018 09:43:46.041380  366919 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:43:46.042560  366919 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:43:46.043522  366919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:43:46.043617  366919 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:43:46.043675  366919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:43:46.043842  366919 cache.go:107] acquiring lock: {Name:mk8d380524b774b5edadec7411def9ea12a01591 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043848  366919 cache.go:107] acquiring lock: {Name:mka49eac321c9a155354693a3a6be91b02cdc4a5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043918  366919 cache.go:107] acquiring lock: {Name:mka2dd49281e4623d770ed33d958b114b7cc789f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043868  366919 cache.go:107] acquiring lock: {Name:mk3d292d197011122be585423e2f701ad4e9ea53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043929  366919 cache.go:107] acquiring lock: {Name:mk2f4cf60554cd9991205940f1aa9911f9bb383a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043985  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1018 09:43:46.043957  366919 cache.go:107] acquiring lock: {Name:mka90deb6de3b7e19386c6d0f0785fc3e96d2e63 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.043995  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1018 09:43:46.043996  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1018 09:43:46.043996  366919 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 78.503µs
	I1018 09:43:46.044005  366919 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 89.399µs
	I1018 09:43:46.044007  366919 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 156.418µs
	I1018 09:43:46.043987  366919 cache.go:107] acquiring lock: {Name:mk9ad0aa9744bfc6007683a43233309af99e2ada Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.044018  366919 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1018 09:43:46.044018  366919 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1018 09:43:46.044012  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1018 09:43:46.043998  366919 cache.go:107] acquiring lock: {Name:mk61b8919142cd1b35d71e72ba258fc114b79f81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.044047  366919 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 246.637µs
	I1018 09:43:46.044055  366919 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1018 09:43:46.044104  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1018 09:43:46.043985  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1018 09:43:46.044129  366919 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 223.377µs
	I1018 09:43:46.044138  366919 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1018 09:43:46.044143  366919 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 339.93µs
	I1018 09:43:46.044150  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1018 09:43:46.044158  366919 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1018 09:43:46.044019  366919 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1018 09:43:46.044160  366919 cache.go:115] /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1018 09:43:46.044200  366919 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 259.875µs
	I1018 09:43:46.044212  366919 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1018 09:43:46.044165  366919 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 222.054µs
	I1018 09:43:46.044220  366919 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1018 09:43:46.044229  366919 cache.go:87] Successfully saved all images to host disk.
	I1018 09:43:46.066081  366919 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:43:46.066101  366919 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:43:46.066116  366919 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:43:46.066137  366919 start.go:360] acquireMachinesLock for no-preload-589869: {Name:mk63da8322dd3ab3d8f833b8b716fde137314571 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:43:46.066187  366919 start.go:364] duration metric: took 35.579µs to acquireMachinesLock for "no-preload-589869"
	I1018 09:43:46.066204  366919 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:43:46.066212  366919 fix.go:54] fixHost starting: 
	I1018 09:43:46.066405  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:46.083586  366919 fix.go:112] recreateIfNeeded on no-preload-589869: state=Stopped err=<nil>
	W1018 09:43:46.083616  366919 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:43:44.053069  364574 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:43:44.059054  364574 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:43:44.060491  364574 api_server.go:141] control plane version: v1.28.0
	I1018 09:43:44.060514  364574 api_server.go:131] duration metric: took 507.720119ms to wait for apiserver health ...
	I1018 09:43:44.060523  364574 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:44.064165  364574 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:44.064203  364574 system_pods.go:61] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:44.064216  364574 system_pods.go:61] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:44.064228  364574 system_pods.go:61] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:44.064239  364574 system_pods.go:61] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:44.064249  364574 system_pods.go:61] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:44.064255  364574 system_pods.go:61] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:44.064263  364574 system_pods.go:61] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:44.064272  364574 system_pods.go:61] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:44.064280  364574 system_pods.go:74] duration metric: took 3.752222ms to wait for pod list to return data ...
	I1018 09:43:44.064293  364574 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:44.066175  364574 default_sa.go:45] found service account: "default"
	I1018 09:43:44.066192  364574 default_sa.go:55] duration metric: took 1.892091ms for default service account to be created ...
	I1018 09:43:44.066200  364574 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:44.069244  364574 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:44.069272  364574 system_pods.go:89] "coredns-5dd5756b68-wklp4" [666ccb81-9bb0-4ee0-8fe1-8d060091f9b0] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:44.069283  364574 system_pods.go:89] "etcd-old-k8s-version-619885" [dca6ec98-a949-42a4-9c2a-bc2a1a60f5c2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:44.069295  364574 system_pods.go:89] "kindnet-vpnhf" [4dadafc2-f316-4101-b535-142210628ad3] Running
	I1018 09:43:44.069305  364574 system_pods.go:89] "kube-apiserver-old-k8s-version-619885" [09a3e16e-e8bd-4c4f-9605-65df6c74f6df] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:44.069311  364574 system_pods.go:89] "kube-controller-manager-old-k8s-version-619885" [07f81e84-301e-4a8b-9b8e-3c04e4325a0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:44.069321  364574 system_pods.go:89] "kube-proxy-spkr8" [74de2fd0-602e-4deb-942b-b2d6236b4472] Running
	I1018 09:43:44.069329  364574 system_pods.go:89] "kube-scheduler-old-k8s-version-619885" [688ca738-cfaf-4503-90db-35314c08ac6d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:44.069337  364574 system_pods.go:89] "storage-provisioner" [398d98bd-a962-40a6-ba34-a3d0a5ea35ca] Running
	I1018 09:43:44.069351  364574 system_pods.go:126] duration metric: took 3.145847ms to wait for k8s-apps to be running ...
	I1018 09:43:44.069363  364574 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:44.069414  364574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:44.082213  364574 system_svc.go:56] duration metric: took 12.842491ms WaitForService to wait for kubelet
	I1018 09:43:44.082235  364574 kubeadm.go:586] duration metric: took 3.64667679s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:44.082253  364574 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:44.084683  364574 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:44.084708  364574 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:44.084721  364574 node_conditions.go:105] duration metric: took 2.464173ms to run NodePressure ...
	I1018 09:43:44.084734  364574 start.go:241] waiting for startup goroutines ...
	I1018 09:43:44.084743  364574 start.go:246] waiting for cluster config update ...
	I1018 09:43:44.084758  364574 start.go:255] writing updated cluster config ...
	I1018 09:43:44.085061  364574 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:44.088911  364574 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:44.093426  364574 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:43:46.101089  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	I1018 09:43:46.224301  353123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054994591s)
	W1018 09:43:46.224348  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1018 09:43:46.224359  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:46.224376  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:46.259157  353123 logs.go:123] Gathering logs for kube-apiserver [bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9] ...
	I1018 09:43:46.259190  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:46.292558  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:46.292596  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:46.339561  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:46.339652  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:46.086041  366919 out.go:252] * Restarting existing docker container for "no-preload-589869" ...
	I1018 09:43:46.086128  366919 cli_runner.go:164] Run: docker start no-preload-589869
	I1018 09:43:46.330566  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:46.349925  366919 kic.go:430] container "no-preload-589869" state is running.
	I1018 09:43:46.350504  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:46.369484  366919 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/config.json ...
	I1018 09:43:46.369785  366919 machine.go:93] provisionDockerMachine start ...
	I1018 09:43:46.369895  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:46.389951  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:46.390197  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:46.390213  366919 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:43:46.390886  366919 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54370->127.0.0.1:33196: read: connection reset by peer
	I1018 09:43:49.528799  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:43:49.528852  366919 ubuntu.go:182] provisioning hostname "no-preload-589869"
	I1018 09:43:49.528927  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:49.546576  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:49.546787  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:49.546801  366919 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-589869 && echo "no-preload-589869" | sudo tee /etc/hostname
	I1018 09:43:49.689515  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-589869
	
	I1018 09:43:49.689617  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:49.707758  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:49.708074  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:49.708102  366919 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-589869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-589869/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-589869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:43:49.841538  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:43:49.841582  366919 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:43:49.841607  366919 ubuntu.go:190] setting up certificates
	I1018 09:43:49.841619  366919 provision.go:84] configureAuth start
	I1018 09:43:49.841677  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:49.860015  366919 provision.go:143] copyHostCerts
	I1018 09:43:49.860089  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:43:49.860108  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:43:49.860195  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:43:49.860343  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:43:49.860357  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:43:49.860401  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:43:49.860495  366919 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:43:49.860506  366919 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:43:49.860545  366919 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:43:49.860628  366919 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.no-preload-589869 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-589869]
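The regenerated server certificate carries a SAN list mixing IP addresses and DNS names (127.0.0.1, 192.168.94.2, localhost, minikube, no-preload-589869). A minimal sketch of issuing such a certificate with Go's crypto/x509; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair shown above:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-589869"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line above: IPs and DNS names go in separate fields.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			DNSNames:    []string{"localhost", "minikube", "no-preload-589869"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}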
	I1018 09:43:50.148919  366919 provision.go:177] copyRemoteCerts
	I1018 09:43:50.148980  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:43:50.149021  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.166754  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.263430  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:43:50.281417  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:43:50.298517  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:43:50.315182  366919 provision.go:87] duration metric: took 473.546028ms to configureAuth
	I1018 09:43:50.315208  366919 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:43:50.315369  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:50.315472  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.332788  366919 main.go:141] libmachine: Using SSH client type: native
	I1018 09:43:50.333021  366919 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33196 <nil> <nil>}
	I1018 09:43:50.333040  366919 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:43:50.619905  366919 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:43:50.619938  366919 machine.go:96] duration metric: took 4.250134197s to provisionDockerMachine
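The sysconfig drop-in written above is how the insecure service CIDR reaches CRI-O; presumably the kicbase crio unit loads /etc/sysconfig/crio.minikube as an environment file (an assumption, since the unit itself is not shown in this log). The file write, reduced to Go:

	package main

	import "os"

	func main() {
		// Content matches the printf %s payload in the log, including the
		// leading and trailing newlines.
		content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
		if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0644); err != nil {
			panic(err)
		}
		// A `systemctl restart crio` would follow, as in the log.
	}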
	I1018 09:43:50.619954  366919 start.go:293] postStartSetup for "no-preload-589869" (driver="docker")
	I1018 09:43:50.619967  366919 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:43:50.620044  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:43:50.620100  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.638702  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.737639  366919 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:43:50.741946  366919 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:43:50.741983  366919 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:43:50.741998  366919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:43:50.742054  366919 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:43:50.742158  366919 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:43:50.742279  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:43:50.751096  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:43:50.772874  366919 start.go:296] duration metric: took 152.899929ms for postStartSetup
	I1018 09:43:50.772967  366919 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:43:50.773015  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.793737  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.890997  366919 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:43:50.896274  366919 fix.go:56] duration metric: took 4.830055131s for fixHost
	I1018 09:43:50.896298  366919 start.go:83] releasing machines lock for "no-preload-589869", held for 4.830101526s
	I1018 09:43:50.896361  366919 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-589869
	I1018 09:43:50.914781  366919 ssh_runner.go:195] Run: cat /version.json
	I1018 09:43:50.914850  366919 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:43:50.914857  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.914918  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:50.935573  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:50.936201  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:51.083096  366919 ssh_runner.go:195] Run: systemctl --version
	I1018 09:43:51.089737  366919 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:43:51.124844  366919 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:43:51.129934  366919 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:43:51.130009  366919 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:43:51.137935  366919 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:43:51.137955  366919 start.go:495] detecting cgroup driver to use...
	I1018 09:43:51.137984  366919 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:43:51.138017  366919 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:43:51.152013  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:43:51.163809  366919 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:43:51.163894  366919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:43:51.178003  366919 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:43:51.189980  366919 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:43:51.271384  366919 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:43:51.351239  366919 docker.go:234] disabling docker service ...
	I1018 09:43:51.351297  366919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:43:51.365328  366919 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:43:51.377371  366919 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:43:51.457431  366919 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:43:51.538995  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
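The sequence above stops, disables, and masks cri-docker and docker so that CRI-O is the only runtime bound to the node; each step is best-effort because the units may not exist in the image. The same tolerant sequence, sketched:

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Best-effort: any of these units may be absent, so errors are
		// logged rather than treated as fatal, matching the log's behavior.
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", "cri-docker.socket"},
			{"systemctl", "stop", "-f", "cri-docker.service"},
			{"systemctl", "disable", "cri-docker.socket"},
			{"systemctl", "mask", "cri-docker.service"},
			{"systemctl", "stop", "-f", "docker.socket"},
			{"systemctl", "stop", "-f", "docker.service"},
			{"systemctl", "disable", "docker.socket"},
			{"systemctl", "mask", "docker.service"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				log.Printf("%v: %v (%s)", args, err, out)
			}
		}
	}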
	I1018 09:43:51.551761  366919 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:43:51.565859  366919 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:43:51.565923  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.574929  366919 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:43:51.574983  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.583790  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.592540  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.602194  366919 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:43:51.610539  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.619364  366919 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.627930  366919 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:43:51.637014  366919 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:43:51.644621  366919 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:43:51.652954  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:51.732572  366919 ssh_runner.go:195] Run: sudo systemctl restart crio
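Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from disk; section placement follows the stock crio.conf layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]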
	I1018 09:43:51.846566  366919 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:43:51.846634  366919 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:43:51.850912  366919 start.go:563] Will wait 60s for crictl version
	I1018 09:43:51.850994  366919 ssh_runner.go:195] Run: which crictl
	I1018 09:43:51.855054  366919 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:43:51.879985  366919 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:43:51.880070  366919 ssh_runner.go:195] Run: crio --version
	I1018 09:43:51.908200  366919 ssh_runner.go:195] Run: crio --version
	I1018 09:43:51.937497  366919 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:43:51.938545  366919 cli_runner.go:164] Run: docker network inspect no-preload-589869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:43:51.956396  366919 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:43:51.960716  366919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
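host.minikube.internal is pinned with a strip-then-append rewrite: grep -v drops any stale mapping, the fresh line is appended, and the result is copied back over /etc/hosts (cp rather than a rename, since inside a container /etc/hosts is a bind mount whose inode cannot be replaced). A stand-alone sketch of the same idea:

	package main

	import (
		"os"
		"strings"
	)

	// pinHost ensures exactly one line in hostsPath maps ip to name, mirroring
	// the grep -v / append / cp sequence from the log.
	func pinHost(hostsPath, ip, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// grep -v $'\t<name>$': drop any existing mapping for name.
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		// The log stages the result in /tmp/h.$$ and then cp's it over the
		// target; overwriting in place (not renaming) is what keeps this
		// working on a bind-mounted /etc/hosts.
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		if err := pinHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
			panic(err)
		}
	}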
	I1018 09:43:51.971090  366919 kubeadm.go:883] updating cluster {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:43:51.971196  366919 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:43:51.971246  366919 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:43:52.006147  366919 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:43:52.006169  366919 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:43:52.006176  366919 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1018 09:43:52.006263  366919 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-589869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:43:52.006320  366919 ssh_runner.go:195] Run: crio config
	I1018 09:43:52.055879  366919 cni.go:84] Creating CNI manager for ""
	I1018 09:43:52.055905  366919 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:43:52.055926  366919 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:43:52.055955  366919 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-589869 NodeName:no-preload-589869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:43:52.056121  366919 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-589869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
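minikube renders the three documents above from the kubeadm options struct logged at kubeadm.go:190 rather than writing them by hand. The render step, illustrated with Go's text/template (the template fragment and field names here are illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		// Values copied from the kubeadm options logged above.
		opts := struct {
			AdvertiseAddress string
			APIServerPort    int
			CRISocket        string
			NodeName         string
		}{"192.168.94.2", 8443, "/var/run/crio/crio.sock", "no-preload-589869"}
		template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
	}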
	
	I1018 09:43:52.056185  366919 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:43:52.066083  366919 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:43:52.066166  366919 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:43:52.074400  366919 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1018 09:43:52.087017  366919 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:43:52.100911  366919 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1018 09:43:52.114448  366919 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:43:52.118293  366919 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:43:52.128564  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:52.216446  366919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:52.244372  366919 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869 for IP: 192.168.94.2
	I1018 09:43:52.244396  366919 certs.go:195] generating shared ca certs ...
	I1018 09:43:52.244411  366919 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:52.244575  366919 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:43:52.244647  366919 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:43:52.244662  366919 certs.go:257] generating profile certs ...
	I1018 09:43:52.244781  366919 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.key
	I1018 09:43:52.244891  366919 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key.3d5af95d
	I1018 09:43:52.244955  366919 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key
	I1018 09:43:52.245131  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:43:52.245169  366919 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:43:52.245184  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:43:52.245219  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:43:52.245259  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:43:52.245293  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:43:52.245346  366919 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:43:52.246039  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:43:52.266549  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:43:52.285975  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:43:52.307060  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:43:52.331182  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:43:52.352215  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:43:52.370064  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:43:52.387403  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1018 09:43:52.405029  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:43:52.423936  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:43:52.444459  366919 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:43:52.463403  366919 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:43:52.476687  366919 ssh_runner.go:195] Run: openssl version
	I1018 09:43:52.484000  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:43:52.492608  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.496517  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.496584  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:43:52.535913  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:43:52.544148  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:43:52.552802  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.556563  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.556626  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:43:52.594448  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:43:52.604150  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:43:52.614424  366919 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.618359  366919 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.618412  366919 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:43:52.654528  366919 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:43:52.663177  366919 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:43:52.667296  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:43:52.701931  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:43:52.738849  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:43:52.786090  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:43:52.832096  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:43:52.886538  366919 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
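Each openssl x509 -checkend 86400 above asks whether a certificate expires within the next 24 hours; a non-zero exit would force regeneration. The equivalent check in Go (the path is one of the certs probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("not a PEM file")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; regenerate")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least another 24h")
	}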
	I1018 09:43:52.930366  366919 kubeadm.go:400] StartCluster: {Name:no-preload-589869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-589869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:43:52.930448  366919 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:43:52.930513  366919 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:43:52.960751  366919 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:43:52.960776  366919 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:43:52.960782  366919 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:43:52.960786  366919 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:43:52.960790  366919 cri.go:89] found id: ""
	I1018 09:43:52.960849  366919 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:43:52.973139  366919 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:43:52Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:43:52.973210  366919 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:43:52.982376  366919 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:43:52.982400  366919 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:43:52.982452  366919 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:43:52.989996  366919 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:43:52.990697  366919 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-589869" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:52.991284  366919 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-589869" cluster setting kubeconfig missing "no-preload-589869" context setting]
	I1018 09:43:52.992029  366919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:52.993466  366919 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:43:53.001092  366919 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 09:43:53.001124  366919 kubeadm.go:601] duration metric: took 18.716954ms to restartPrimaryControlPlane
	I1018 09:43:53.001134  366919 kubeadm.go:402] duration metric: took 70.776761ms to StartCluster
	I1018 09:43:53.001153  366919 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:53.001219  366919 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:43:53.002442  366919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:43:53.002663  366919 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:43:53.002721  366919 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:43:53.002856  366919 addons.go:69] Setting storage-provisioner=true in profile "no-preload-589869"
	I1018 09:43:53.002882  366919 addons.go:238] Setting addon storage-provisioner=true in "no-preload-589869"
	W1018 09:43:53.002894  366919 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:43:53.002887  366919 addons.go:69] Setting dashboard=true in profile "no-preload-589869"
	I1018 09:43:53.002901  366919 addons.go:69] Setting default-storageclass=true in profile "no-preload-589869"
	I1018 09:43:53.002923  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.002926  366919 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:43:53.002939  366919 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-589869"
	I1018 09:43:53.002918  366919 addons.go:238] Setting addon dashboard=true in "no-preload-589869"
	W1018 09:43:53.002984  366919 addons.go:247] addon dashboard should already be in state true
	I1018 09:43:53.003005  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.003292  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.003407  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.003414  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.005729  366919 out.go:179] * Verifying Kubernetes components...
	I1018 09:43:53.008648  366919 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:43:53.029317  366919 addons.go:238] Setting addon default-storageclass=true in "no-preload-589869"
	W1018 09:43:53.029345  366919 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:43:53.029377  366919 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:43:53.029962  366919 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:43:53.032926  366919 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:43:53.032997  366919 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:43:53.033816  366919 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:53.033856  366919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:43:53.033920  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.035675  366919 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1018 09:43:48.598741  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	W1018 09:43:50.599471  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	W1018 09:43:52.599855  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: node "old-k8s-version-619885" hosting pod "coredns-5dd5756b68-wklp4" is not "Ready" (will retry)
	I1018 09:43:48.875056  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:48.875508  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:48.875558  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:48.875611  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:48.902616  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:48.902648  353123 cri.go:89] found id: "bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:48.902654  353123 cri.go:89] found id: ""
	I1018 09:43:48.902663  353123 logs.go:282] 2 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9]
	I1018 09:43:48.902720  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.906791  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.910428  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:48.910483  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:48.937054  353123 cri.go:89] found id: ""
	I1018 09:43:48.937077  353123 logs.go:282] 0 containers: []
	W1018 09:43:48.937087  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:48.937094  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:48.937156  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:48.963525  353123 cri.go:89] found id: ""
	I1018 09:43:48.963551  353123 logs.go:282] 0 containers: []
	W1018 09:43:48.963563  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:48.963571  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:48.963637  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:48.989538  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:48.989558  353123 cri.go:89] found id: ""
	I1018 09:43:48.989577  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:48.989637  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:48.993391  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:48.993471  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:49.019697  353123 cri.go:89] found id: ""
	I1018 09:43:49.019720  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.019728  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:49.019734  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:49.019793  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:49.045695  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:49.045715  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:49.045720  353123 cri.go:89] found id: ""
	I1018 09:43:49.045727  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:49.045773  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:49.049954  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:49.053609  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:49.053671  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:49.079477  353123 cri.go:89] found id: ""
	I1018 09:43:49.079504  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.079515  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:49.079522  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:49.079569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:49.106640  353123 cri.go:89] found id: ""
	I1018 09:43:49.106663  353123 logs.go:282] 0 containers: []
	W1018 09:43:49.106670  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:49.106685  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:49.106695  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:49.168459  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:49.168493  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:49.223579  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:49.223611  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:49.223627  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:49.255960  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:49.255988  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:49.297850  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:49.297879  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:49.325174  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:49.325204  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:49.343369  353123 logs.go:123] Gathering logs for kube-apiserver [bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9] ...
	I1018 09:43:49.343399  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 bf3494d5bef6582646cb9fd62b020501a179e17ec4ae80baaa34921388e4d2c9"
	I1018 09:43:49.374840  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:49.374873  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:49.401680  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:49.401705  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:49.452420  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:49.452452  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:51.983883  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:51.984247  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:51.984295  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:51.984342  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:52.013861  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:52.013885  353123 cri.go:89] found id: ""
	I1018 09:43:52.013895  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:52.013972  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.018080  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:52.018145  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:52.046611  353123 cri.go:89] found id: ""
	I1018 09:43:52.046640  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.046651  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:52.046659  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:52.046715  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:52.076555  353123 cri.go:89] found id: ""
	I1018 09:43:52.076577  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.076584  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:52.076590  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:52.076654  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:52.104654  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:52.104676  353123 cri.go:89] found id: ""
	I1018 09:43:52.104684  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:52.104742  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.108491  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:52.108546  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:52.135425  353123 cri.go:89] found id: ""
	I1018 09:43:52.135453  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.135463  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:52.135471  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:52.135525  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:52.167838  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:52.167873  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:52.167880  353123 cri.go:89] found id: ""
	I1018 09:43:52.167893  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:52.167961  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.172042  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:52.175551  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:52.175639  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:52.202083  353123 cri.go:89] found id: ""
	I1018 09:43:52.202111  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.202121  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:52.202129  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:52.202188  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:52.229647  353123 cri.go:89] found id: ""
	I1018 09:43:52.229679  353123 logs.go:282] 0 containers: []
	W1018 09:43:52.229698  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:52.229717  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:52.229733  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:52.266887  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:52.266924  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:52.301334  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:52.301368  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:52.385415  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:52.385451  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:52.406129  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:52.406164  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:52.453657  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:52.453695  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:52.482385  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:52.482407  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:52.509024  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:52.509052  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:52.554701  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:52.554725  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:52.614373  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
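The block above is one full pass of minikube's log gatherer: for each control-plane component it lists matching CRI containers (sudo crictl ps -a --quiet --name=...) and, for every ID found, tails that container's logs with crictl logs --tail 400. A minimal self-contained sketch of that loop, assuming crictl is on the PATH and runnable via sudo; the component list is copied from the log, and the SSH/Runner plumbing in cri.go and logs.go is omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Components probed by the gatherer, as listed in the log above.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, name := range components {
		// Equivalent of: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the "No container was found matching ..." warnings above.
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Equivalent of: sudo crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
		}
	}
}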
	I1018 09:43:53.036676  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:43:53.036699  366919 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:43:53.036791  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.063982  366919 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:53.064007  366919 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:43:53.064086  366919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:43:53.071330  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.073249  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.093297  366919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:43:53.158365  366919 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:43:53.170898  366919 node_ready.go:35] waiting up to 6m0s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:53.185397  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:43:53.185445  366919 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:43:53.186368  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:43:53.199966  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:43:53.199990  366919 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:43:53.204302  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:43:53.215788  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:43:53.215813  366919 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:43:53.231349  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:43:53.231375  366919 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:43:53.250124  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:43:53.250150  366919 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:43:53.267779  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:43:53.267812  366919 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:43:53.281749  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:43:53.281778  366919 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:43:53.294674  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:43:53.294701  366919 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:43:53.307138  366919 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:43:53.307164  366919 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:43:53.319525  366919 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
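The scp lines above stage the dashboard manifests under /etc/kubernetes/addons/ on the node; the single kubectl invocation then applies them all at once against the kubelet's kubeconfig. A condensed sketch of that final step, assuming it runs locally on the node (minikube actually executes it through its SSH runner, and the paths come straight from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml", "dashboard-clusterrolebinding.yaml",
		"dashboard-configmap.yaml", "dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml", "dashboard-secret.yaml",
		"dashboard-svc.yaml",
	}
	// Build: sudo KUBECONFIG=... kubectl apply -f <m1> -f <m2> ...
	// (sudo accepts leading VAR=value arguments as environment settings.)
	args := []string{
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
	}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}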
	I1018 09:43:54.972861  366919 node_ready.go:49] node "no-preload-589869" is "Ready"
	I1018 09:43:54.972895  366919 node_ready.go:38] duration metric: took 1.801958248s for node "no-preload-589869" to be "Ready" ...
	I1018 09:43:54.972914  366919 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:43:54.972971  366919 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:43:55.563137  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.376728205s)
	I1018 09:43:55.563243  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.358900953s)
	I1018 09:43:55.563345  366919 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.243780483s)
	I1018 09:43:55.563367  366919 api_server.go:72] duration metric: took 2.560676225s to wait for apiserver process to appear ...
	I1018 09:43:55.563381  366919 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:43:55.563409  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:55.565046  366919 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-589869 addons enable metrics-server
	
	I1018 09:43:55.568344  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:43:55.568376  366919 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
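While poststarthooks such as rbac/bootstrap-roles are still failing, /healthz answers 500 and minikube keeps polling until it gets 200 "ok". A minimal sketch of that poll, assuming the endpoint is reachable anonymously and skipping TLS verification; the real check in api_server.go authenticates with the cluster's client certificate and trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Illustration only: the real check verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			// "stopped: ... connection refused" while the apiserver restarts.
			fmt.Println("stopped:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			// 500 with "[-]poststarthook/... failed" lines means "not ready yet".
			fmt.Printf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}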
	I1018 09:43:55.570502  366919 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:43:55.573912  366919 addons.go:514] duration metric: took 2.571190944s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	W1018 09:43:55.099487  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	W1018 09:43:57.600137  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	I1018 09:43:55.114551  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:55.115085  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:55.115143  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:55.115192  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:55.161219  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:55.161247  353123 cri.go:89] found id: ""
	I1018 09:43:55.161257  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:55.161324  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.170224  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:55.170305  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:55.200890  353123 cri.go:89] found id: ""
	I1018 09:43:55.200918  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.200928  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:55.200935  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:55.200980  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:55.229782  353123 cri.go:89] found id: ""
	I1018 09:43:55.229812  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.229842  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:55.229850  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:55.229912  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:55.261593  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:55.261618  353123 cri.go:89] found id: ""
	I1018 09:43:55.261629  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:55.261690  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.266623  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:55.266698  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:55.298111  353123 cri.go:89] found id: ""
	I1018 09:43:55.298141  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.298151  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:55.298160  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:55.298226  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:55.331797  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:55.331830  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:55.331837  353123 cri.go:89] found id: ""
	I1018 09:43:55.331846  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:55.331924  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.337277  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:55.342655  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:55.342725  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:55.381359  353123 cri.go:89] found id: ""
	I1018 09:43:55.381577  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.381598  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:55.381609  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:55.381768  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:55.426587  353123 cri.go:89] found id: ""
	I1018 09:43:55.426617  353123 logs.go:282] 0 containers: []
	W1018 09:43:55.426628  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:55.426696  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:55.426715  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:55.516649  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:55.516686  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:55.537369  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:55.537405  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:55.602370  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:55.602388  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:55.602404  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:55.630263  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:55.630305  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:55.674395  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:55.674438  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:55.727344  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:55.727436  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:55.783819  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:55.783881  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:55.833768  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:55.833806  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:43:58.367561  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:43:58.368111  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:43:58.368174  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:43:58.368237  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:43:58.404599  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:58.404624  353123 cri.go:89] found id: ""
	I1018 09:43:58.404635  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:43:58.404700  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.409553  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:43:58.409635  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:43:58.444727  353123 cri.go:89] found id: ""
	I1018 09:43:58.444757  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.444769  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:43:58.444779  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:43:58.444877  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:43:58.477665  353123 cri.go:89] found id: ""
	I1018 09:43:58.477687  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.477695  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:43:58.477702  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:43:58.477748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:43:58.506961  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:58.506987  353123 cri.go:89] found id: ""
	I1018 09:43:58.506998  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:43:58.507061  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.512256  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:43:58.512331  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:43:58.545142  353123 cri.go:89] found id: ""
	I1018 09:43:58.545172  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.545183  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:43:58.545191  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:43:58.545258  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:43:58.578900  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:58.578928  353123 cri.go:89] found id: "dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:58.578934  353123 cri.go:89] found id: ""
	I1018 09:43:58.578944  353123 logs.go:282] 2 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18]
	I1018 09:43:58.579006  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.584607  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:43:58.590709  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:43:58.590932  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:43:58.627132  353123 cri.go:89] found id: ""
	I1018 09:43:58.627158  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.627168  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:43:58.627176  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:43:58.627234  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:43:56.063808  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:56.069236  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:43:56.069266  366919 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:43:56.563889  366919 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1018 09:43:56.568121  366919 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1018 09:43:56.569093  366919 api_server.go:141] control plane version: v1.34.1
	I1018 09:43:56.569119  366919 api_server.go:131] duration metric: took 1.005724823s to wait for apiserver health ...
	I1018 09:43:56.569128  366919 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:43:56.572026  366919 system_pods.go:59] 8 kube-system pods found
	I1018 09:43:56.572057  366919 system_pods.go:61] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:56.572067  366919 system_pods.go:61] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:56.572075  366919 system_pods.go:61] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:56.572084  366919 system_pods.go:61] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:56.572091  366919 system_pods.go:61] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:56.572098  366919 system_pods.go:61] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:56.572106  366919 system_pods.go:61] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:56.572115  366919 system_pods.go:61] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Running
	I1018 09:43:56.572123  366919 system_pods.go:74] duration metric: took 2.98957ms to wait for pod list to return data ...
	I1018 09:43:56.572134  366919 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:43:56.574301  366919 default_sa.go:45] found service account: "default"
	I1018 09:43:56.574318  366919 default_sa.go:55] duration metric: took 2.177253ms for default service account to be created ...
	I1018 09:43:56.574325  366919 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:43:56.577061  366919 system_pods.go:86] 8 kube-system pods found
	I1018 09:43:56.577086  366919 system_pods.go:89] "coredns-66bc5c9577-pck54" [602e29ab-ecfb-4629-a801-28c32d870d4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:43:56.577093  366919 system_pods.go:89] "etcd-no-preload-589869" [4d5dfb31-d876-4b94-92b6-119124511a9a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:43:56.577108  366919 system_pods.go:89] "kindnet-zjqmf" [f9912369-31bd-48e1-b05e-e623a8b4e541] Running
	I1018 09:43:56.577117  366919 system_pods.go:89] "kube-apiserver-no-preload-589869" [2584bf4b-0c8f-41a7-bc9b-06cb402dc7cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:43:56.577128  366919 system_pods.go:89] "kube-controller-manager-no-preload-589869" [52f102ff-416e-4a0f-9ba4-60fca43d533e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:43:56.577134  366919 system_pods.go:89] "kube-proxy-45kpn" [1f457398-f624-4d8b-bb01-66d9f3a15033] Running
	I1018 09:43:56.577140  366919 system_pods.go:89] "kube-scheduler-no-preload-589869" [60a71bc7-82e8-4028-98db-d34384b00875] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:43:56.577145  366919 system_pods.go:89] "storage-provisioner" [9c851a2c-8320-45ae-9c2f-3f60bc0401c8] Running
	I1018 09:43:56.577151  366919 system_pods.go:126] duration metric: took 2.821656ms to wait for k8s-apps to be running ...
	I1018 09:43:56.577160  366919 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:43:56.577201  366919 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:43:56.590429  366919 system_svc.go:56] duration metric: took 13.258132ms WaitForService to wait for kubelet
	I1018 09:43:56.590454  366919 kubeadm.go:586] duration metric: took 3.587767635s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:43:56.590470  366919 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:43:56.593275  366919 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:43:56.593309  366919 node_conditions.go:123] node cpu capacity is 8
	I1018 09:43:56.593327  366919 node_conditions.go:105] duration metric: took 2.852019ms to run NodePressure ...
	I1018 09:43:56.593344  366919 start.go:241] waiting for startup goroutines ...
	I1018 09:43:56.593358  366919 start.go:246] waiting for cluster config update ...
	I1018 09:43:56.593376  366919 start.go:255] writing updated cluster config ...
	I1018 09:43:56.593687  366919 ssh_runner.go:195] Run: rm -f paused
	I1018 09:43:56.597912  366919 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:43:56.601361  366919 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pck54" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:43:58.609872  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:00.610618  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:43:59.600964  364574 pod_ready.go:104] pod "coredns-5dd5756b68-wklp4" is not "Ready", error: <nil>
	I1018 09:44:01.101049  364574 pod_ready.go:94] pod "coredns-5dd5756b68-wklp4" is "Ready"
	I1018 09:44:01.101082  364574 pod_ready.go:86] duration metric: took 17.007633554s for pod "coredns-5dd5756b68-wklp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.105018  364574 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.111217  364574 pod_ready.go:94] pod "etcd-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.111243  364574 pod_ready.go:86] duration metric: took 6.201206ms for pod "etcd-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.117066  364574 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.122981  364574 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.123002  364574 pod_ready.go:86] duration metric: took 5.915488ms for pod "kube-apiserver-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.126752  364574 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.298102  364574 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-619885" is "Ready"
	I1018 09:44:01.298145  364574 pod_ready.go:86] duration metric: took 171.370267ms for pod "kube-controller-manager-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.498818  364574 pod_ready.go:83] waiting for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:01.898035  364574 pod_ready.go:94] pod "kube-proxy-spkr8" is "Ready"
	I1018 09:44:01.898066  364574 pod_ready.go:86] duration metric: took 399.178015ms for pod "kube-proxy-spkr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.098403  364574 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.496992  364574 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-619885" is "Ready"
	I1018 09:44:02.497018  364574 pod_ready.go:86] duration metric: took 398.590697ms for pod "kube-scheduler-old-k8s-version-619885" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:44:02.497030  364574 pod_ready.go:40] duration metric: took 18.40808647s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:44:02.546419  364574 start.go:624] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1018 09:44:02.551194  364574 out.go:203] 
	W1018 09:44:02.552350  364574 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1018 09:44:02.553351  364574 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1018 09:44:02.554373  364574 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-619885" cluster and "default" namespace by default
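The pod_ready phase that just completed for old-k8s-version-619885 iterates over one label selector per control-plane component and waits until every matching pod reports Ready (or is gone). A rough equivalent using a kubectl shell-out, with the selectors copied from the log; minikube itself does this through client-go rather than kubectl:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute)
	for _, sel := range selectors {
		for {
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", sel)
				return
			}
			// One "True"/"False" per matching pod, from the Ready condition.
			out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
				"-l", sel, "-o",
				`jsonpath={range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}`).Output()
			if err == nil && !strings.Contains(string(out), "False") {
				fmt.Println(sel, `is "Ready" (or gone)`)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}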
	I1018 09:43:58.663618  353123 cri.go:89] found id: ""
	I1018 09:43:58.663648  353123 logs.go:282] 0 containers: []
	W1018 09:43:58.663659  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:43:58.663715  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:43:58.663738  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:43:58.739942  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:43:58.739966  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:43:58.739982  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:43:58.783522  353123 logs.go:123] Gathering logs for kube-controller-manager [dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18] ...
	I1018 09:43:58.783569  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc68021843bee721862ef1b94b2d3143e3d79563888b6003d0f9ddc2e0db9d18"
	I1018 09:43:58.821427  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:43:58.821460  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:43:58.926128  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:43:58.926216  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:43:58.955326  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:43:58.955412  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:43:59.018958  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:43:59.019007  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:43:59.054651  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:43:59.054684  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:43:59.118884  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:43:59.118927  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:01.659487  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:01.659919  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:01.659991  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:01.660080  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:01.694753  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:01.694779  353123 cri.go:89] found id: ""
	I1018 09:44:01.694789  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:01.694885  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.700222  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:01.700310  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:01.737639  353123 cri.go:89] found id: ""
	I1018 09:44:01.737666  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.737676  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:01.737683  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:01.737744  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:01.771464  353123 cri.go:89] found id: ""
	I1018 09:44:01.771495  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.771507  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:01.771515  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:01.771601  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:01.808752  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:01.808783  353123 cri.go:89] found id: ""
	I1018 09:44:01.808796  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:01.808895  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.813969  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:01.814051  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:01.850775  353123 cri.go:89] found id: ""
	I1018 09:44:01.850811  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.850838  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:01.850847  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:01.850918  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:01.886907  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:01.886933  353123 cri.go:89] found id: ""
	I1018 09:44:01.886944  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:01.887011  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:01.891964  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:01.892033  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:01.926001  353123 cri.go:89] found id: ""
	I1018 09:44:01.926029  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.926053  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:01.926061  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:01.926285  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:01.965174  353123 cri.go:89] found id: ""
	I1018 09:44:01.965205  353123 logs.go:282] 0 containers: []
	W1018 09:44:01.965216  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:01.965227  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:01.965242  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:02.028887  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:02.028924  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:02.067308  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:02.067361  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:02.171934  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:02.171971  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:02.198336  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:02.198372  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:02.268275  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:02.268297  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:02.268316  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:02.304755  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:02.304789  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:02.357657  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:02.357692  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	W1018 09:44:03.108126  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:05.108347  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	I1018 09:44:04.887479  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:04.887937  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:04.888003  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:04.888064  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:04.924175  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:04.924199  353123 cri.go:89] found id: ""
	I1018 09:44:04.924210  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:04.924268  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:04.929146  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:04.929224  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:04.963702  353123 cri.go:89] found id: ""
	I1018 09:44:04.963729  353123 logs.go:282] 0 containers: []
	W1018 09:44:04.963741  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:04.963748  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:04.963806  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:05.000010  353123 cri.go:89] found id: ""
	I1018 09:44:05.000041  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.000052  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:05.000060  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:05.000121  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:05.035523  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:05.035549  353123 cri.go:89] found id: ""
	I1018 09:44:05.035560  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:05.035630  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:05.040903  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:05.040971  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:05.076714  353123 cri.go:89] found id: ""
	I1018 09:44:05.076746  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.076758  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:05.076765  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:05.076856  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:05.112594  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:05.112619  353123 cri.go:89] found id: ""
	I1018 09:44:05.112629  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:05.112694  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:05.117677  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:05.117748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:05.151934  353123 cri.go:89] found id: ""
	I1018 09:44:05.151962  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.151972  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:05.151980  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:05.152038  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:05.186779  353123 cri.go:89] found id: ""
	I1018 09:44:05.186810  353123 logs.go:282] 0 containers: []
	W1018 09:44:05.186834  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:05.186845  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:05.186863  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:05.231206  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:05.231246  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:05.295779  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:05.295832  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:05.331030  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:05.331067  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:05.397158  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:05.397194  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:05.428937  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:05.428966  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:05.509640  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:05.509673  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:05.528480  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:05.528507  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:05.593478  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:08.095101  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:08.095520  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:08.095579  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:08.095636  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:08.124614  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:08.124632  353123 cri.go:89] found id: ""
	I1018 09:44:08.124640  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:08.124693  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.128666  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:08.128725  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:08.154936  353123 cri.go:89] found id: ""
	I1018 09:44:08.154965  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.154976  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:08.154985  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:08.155052  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:08.180690  353123 cri.go:89] found id: ""
	I1018 09:44:08.180714  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.180724  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:08.180732  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:08.180789  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:08.206537  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:08.206558  353123 cri.go:89] found id: ""
	I1018 09:44:08.206568  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:08.206629  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.210512  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:08.210571  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:08.235865  353123 cri.go:89] found id: ""
	I1018 09:44:08.235889  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.235897  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:08.235904  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:08.235959  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:08.262042  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:08.262064  353123 cri.go:89] found id: ""
	I1018 09:44:08.262073  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:08.262131  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:08.265937  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:08.265992  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:08.291624  353123 cri.go:89] found id: ""
	I1018 09:44:08.291651  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.291660  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:08.291666  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:08.291714  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:08.318553  353123 cri.go:89] found id: ""
	I1018 09:44:08.318582  353123 logs.go:282] 0 containers: []
	W1018 09:44:08.318592  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:08.318601  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:08.318624  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:08.337532  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:08.337561  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:08.393037  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:08.393059  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:08.393074  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:08.427614  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:08.427645  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:08.474784  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:08.474828  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:08.501654  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:08.501682  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:08.546229  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:08.546263  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:08.576106  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:08.576135  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1018 09:44:07.606026  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:09.606923  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	I1018 09:44:11.149661  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:11.150103  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:11.150151  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:11.150205  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:11.176524  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:11.176551  353123 cri.go:89] found id: ""
	I1018 09:44:11.176562  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:11.176621  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.180677  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:11.180746  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:11.206839  353123 cri.go:89] found id: ""
	I1018 09:44:11.206865  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.206876  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:11.206884  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:11.206935  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:11.232446  353123 cri.go:89] found id: ""
	I1018 09:44:11.232486  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.232498  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:11.232507  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:11.232569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:11.259690  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:11.259717  353123 cri.go:89] found id: ""
	I1018 09:44:11.259728  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:11.259788  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.263862  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:11.263929  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:11.290304  353123 cri.go:89] found id: ""
	I1018 09:44:11.290333  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.290343  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:11.290351  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:11.290415  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:11.317474  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:11.317499  353123 cri.go:89] found id: ""
	I1018 09:44:11.317509  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:11.317563  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:11.321537  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:11.321610  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:11.349912  353123 cri.go:89] found id: ""
	I1018 09:44:11.349943  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.349955  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:11.349964  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:11.350101  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:11.377180  353123 cri.go:89] found id: ""
	I1018 09:44:11.377208  353123 logs.go:282] 0 containers: []
	W1018 09:44:11.377219  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:11.377232  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:11.377255  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:11.421302  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:11.421338  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:11.448331  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:11.448356  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:11.494879  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:11.494915  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:11.525200  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:11.525227  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:11.601275  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:11.601309  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:11.620467  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:11.620494  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:11.678481  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:11.678502  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:11.678521  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	W1018 09:44:12.106646  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	W1018 09:44:14.106811  366919 pod_ready.go:104] pod "coredns-66bc5c9577-pck54" is not "Ready", error: <nil>
	I1018 09:44:14.211097  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:14.211491  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:14.211548  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:14.211603  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:14.241489  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:14.241514  353123 cri.go:89] found id: ""
	I1018 09:44:14.241522  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:14.241571  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:14.245713  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:14.245762  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:14.272769  353123 cri.go:89] found id: ""
	I1018 09:44:14.272792  353123 logs.go:282] 0 containers: []
	W1018 09:44:14.272800  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:14.272807  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:14.272888  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:14.308199  353123 cri.go:89] found id: ""
	I1018 09:44:14.308228  353123 logs.go:282] 0 containers: []
	W1018 09:44:14.308239  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:14.308247  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:14.308317  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:14.338662  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:14.338683  353123 cri.go:89] found id: ""
	I1018 09:44:14.338691  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:14.338741  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:14.342908  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:14.342977  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:14.370061  353123 cri.go:89] found id: ""
	I1018 09:44:14.370095  353123 logs.go:282] 0 containers: []
	W1018 09:44:14.370104  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:14.370110  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:14.370161  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:14.400050  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:14.400079  353123 cri.go:89] found id: ""
	I1018 09:44:14.400089  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:14.400147  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:14.404125  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:14.404193  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:14.431479  353123 cri.go:89] found id: ""
	I1018 09:44:14.431505  353123 logs.go:282] 0 containers: []
	W1018 09:44:14.431516  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:14.431533  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:14.431603  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:14.458682  353123 cri.go:89] found id: ""
	I1018 09:44:14.458713  353123 logs.go:282] 0 containers: []
	W1018 09:44:14.458726  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:14.458739  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:14.458757  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:14.477891  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:14.477921  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:14.544244  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:14.544270  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:14.544297  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:14.577356  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:14.577383  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:14.627197  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:14.627232  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:14.657754  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:14.657782  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:14.707647  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:14.707691  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:14.740652  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:14.740685  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:17.323904  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:17.324297  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:17.324346  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:17.324392  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:17.352669  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:17.352691  353123 cri.go:89] found id: ""
	I1018 09:44:17.352701  353123 logs.go:282] 1 containers: [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:17.352758  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:17.357198  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:17.357279  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:17.387268  353123 cri.go:89] found id: ""
	I1018 09:44:17.387299  353123 logs.go:282] 0 containers: []
	W1018 09:44:17.387317  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:17.387326  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:17.387400  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:17.416687  353123 cri.go:89] found id: ""
	I1018 09:44:17.416717  353123 logs.go:282] 0 containers: []
	W1018 09:44:17.416727  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:17.416734  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:17.416802  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:17.445762  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:17.445785  353123 cri.go:89] found id: ""
	I1018 09:44:17.445795  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:17.445864  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:17.449868  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:17.449930  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:17.478446  353123 cri.go:89] found id: ""
	I1018 09:44:17.478475  353123 logs.go:282] 0 containers: []
	W1018 09:44:17.478484  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:17.478491  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:17.478543  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:17.507455  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:17.507476  353123 cri.go:89] found id: ""
	I1018 09:44:17.507485  353123 logs.go:282] 1 containers: [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:17.507532  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:17.511298  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:17.511356  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:17.539015  353123 cri.go:89] found id: ""
	I1018 09:44:17.539035  353123 logs.go:282] 0 containers: []
	W1018 09:44:17.539042  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:17.539049  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:17.539090  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:17.566297  353123 cri.go:89] found id: ""
	I1018 09:44:17.566323  353123 logs.go:282] 0 containers: []
	W1018 09:44:17.566335  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:17.566347  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:17.566366  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:17.649387  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:17.649434  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:17.671473  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:17.671503  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:17.734494  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:17.734518  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:17.734533  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:17.772133  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:17.772175  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:17.819917  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:17.819952  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:17.849260  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:17.849288  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:17.894963  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:17.894989  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
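
	Note: the loop above is minikube's apiserver wait. Every ~3 seconds it probes the healthz endpoint, enumerates control-plane containers with crictl, and re-gathers kubelet/dmesg/CRI-O logs. The failing probe and the container queries can be replayed by hand; a minimal sketch, reusing the endpoint and apiserver container ID exactly as they appear in the log (SSH access to the minikube node is assumed):

		# Probe the endpoint that keeps refusing connections above
		curl -sk https://192.168.85.2:8443/healthz; echo
		# List apiserver containers, then tail the one the gatherer found
		sudo crictl ps -a --quiet --name=kube-apiserver
		sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a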
	
	
	==> CRI-O <==
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.604278772Z" level=info msg="Created container e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=e52f3f4c-4f40-4d7b-a55c-29edd30ae6ee name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.604937331Z" level=info msg="Starting container: e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc" id=887915b9-2358-4a00-ac87-dba57fb24af2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.60707943Z" level=info msg="Started container" PID=1718 containerID=e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper id=887915b9-2358-4a00-ac87-dba57fb24af2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ceb5ef0e56991cab30400c892ee50ee900dbba37e2ad24b03d4226197441651
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.865636616Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e07379b2-f9dd-49ca-9071-562f1dbadb92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.868644718Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=95085fa6-7b80-47a5-8871-b91fb0099e4f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.871420011Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=0afe629b-5101-4fe6-9505-442f3d821404 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.873320763Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.882250793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.882902469Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.908028362Z" level=info msg="Created container 23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=0afe629b-5101-4fe6-9505-442f3d821404 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.908610399Z" level=info msg="Starting container: 23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37" id=fd6633bf-d2d2-487f-8ab4-83f767bf7998 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:02 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:02.910638483Z" level=info msg="Started container" PID=1747 containerID=23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper id=fd6633bf-d2d2-487f-8ab4-83f767bf7998 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ceb5ef0e56991cab30400c892ee50ee900dbba37e2ad24b03d4226197441651
	Oct 18 09:44:03 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:03.871897889Z" level=info msg="Removing container: e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc" id=963761fe-2cdf-436a-8799-0b6cbcfe5f8f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:03 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:03.882932952Z" level=info msg="Removed container e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d/dashboard-metrics-scraper" id=963761fe-2cdf-436a-8799-0b6cbcfe5f8f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.895720602Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e98f5802-0d1e-448f-b0bf-5e831e6d40a9 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.896656095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=04b4183f-62aa-419b-a034-68ea7e025f78 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.897585754Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e91fd285-04a2-4822-993a-10f81840915b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.89788111Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.9032139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.903448944Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b46f0eaf384e6736dee533f3a22d80498dcd4493d52943f6144839a4b63bd7c7/merged/etc/passwd: no such file or directory"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.903594105Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b46f0eaf384e6736dee533f3a22d80498dcd4493d52943f6144839a4b63bd7c7/merged/etc/group: no such file or directory"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.904036138Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.928018886Z" level=info msg="Created container 5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea: kube-system/storage-provisioner/storage-provisioner" id=e91fd285-04a2-4822-993a-10f81840915b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.928572492Z" level=info msg="Starting container: 5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea" id=af481f04-fc67-457e-a941-4ed3c8e0e311 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:13 old-k8s-version-619885 crio[561]: time="2025-10-18T09:44:13.930346852Z" level=info msg="Started container" PID=1761 containerID=5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea description=kube-system/storage-provisioner/storage-provisioner id=af481f04-fc67-457e-a941-4ed3c8e0e311 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1f262ba63a4f9a3bfc95e4d7eb0e4ad95dec1f73cc8610145db80589932e4821
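
	Note: the CRI-O entries above come from the node's journal (the gatherer runs journalctl -u crio -n 400); the container lifecycle events can be isolated from the same source with a grep. A sketch, run on the node:

		sudo journalctl -u crio -n 400 --no-pager | grep -E 'Creating container|Created container|Started container|Removed container'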
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	5b9a25d7ca89e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           5 seconds ago       Running             storage-provisioner         1                   1f262ba63a4f9       storage-provisioner                              kube-system
	23f28b0004688       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   1                   7ceb5ef0e5699       dashboard-metrics-scraper-5f989dc9cf-fm56d       kubernetes-dashboard
	2d6a72283c35f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   19 seconds ago      Running             kubernetes-dashboard        0                   a49233e499b89       kubernetes-dashboard-8694d4445c-88pgw            kubernetes-dashboard
	3d71415e5d23f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           28 seconds ago      Running             coredns                     0                   15879cd00e6d7       coredns-5dd5756b68-wklp4                         kube-system
	f97eabcf99d6b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   094fce3a1dc97       busybox                                          default
	868ad4152848f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           36 seconds ago      Exited              storage-provisioner         0                   1f262ba63a4f9       storage-provisioner                              kube-system
	2d9de25ec275f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           36 seconds ago      Running             kindnet-cni                 0                   5bf812a56d015       kindnet-vpnhf                                    kube-system
	9bac4afda2cd6       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           36 seconds ago      Running             kube-proxy                  0                   1bce143ff28f6       kube-proxy-spkr8                                 kube-system
	7fe7bf854b172       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           39 seconds ago      Running             kube-scheduler              0                   373a1c04046f4       kube-scheduler-old-k8s-version-619885            kube-system
	fdfeb0ddcbc9e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           39 seconds ago      Running             etcd                        0                   adab16a038eaa       etcd-old-k8s-version-619885                      kube-system
	9dea26c3889d8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           39 seconds ago      Running             kube-controller-manager     0                   2a7cdbf4dfa90       kube-controller-manager-old-k8s-version-619885   kube-system
	c46ec81af1bdf       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           39 seconds ago      Running             kube-apiserver              0                   904a10e46f596       kube-apiserver-old-k8s-version-619885            kube-system
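
	Note: in the table above, dashboard-metrics-scraper is on attempt 1 and sits in Exited state while the rest of the pods run. Its exit reason can be read straight from the runtime; a sketch using the truncated container ID from the table (this assumes crictl's usual ID-prefix matching and JSON field names):

		sudo crictl logs 23f28b0004688
		sudo crictl inspect 23f28b0004688 | grep -E '"exitCode"|"reason"'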
	
	
	==> coredns [3d71415e5d23f091c256ec69cb6bd08bff295fdc3222434e5978054f55cd858a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54024 - 50201 "HINFO IN 1289931151697642964.8890851655498100000. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067747865s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-619885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-619885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=old-k8s-version-619885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-619885
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:44:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:42:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:44:13 +0000   Sat, 18 Oct 2025 09:43:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-619885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                5fe2f0a1-057b-421d-9214-f38cf6889451
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 coredns-5dd5756b68-wklp4                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-old-k8s-version-619885                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         105s
	  kube-system                 kindnet-vpnhf                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-old-k8s-version-619885             250m (3%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-controller-manager-old-k8s-version-619885    200m (2%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-spkr8                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-old-k8s-version-619885             100m (1%)     0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-fm56d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-88pgw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 89s                kube-proxy       
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 104s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s               kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s               kubelet          Node old-k8s-version-619885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s               kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           90s                node-controller  Node old-k8s-version-619885 event: Registered Node old-k8s-version-619885 in Controller
	  Normal  NodeReady                77s                kubelet          Node old-k8s-version-619885 status is now: NodeReady
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node old-k8s-version-619885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node old-k8s-version-619885 event: Registered Node old-k8s-version-619885 in Controller
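
	Note: the Events table above is a snapshot at collection time; the live event stream for this profile can be re-queried and time-sorted. A sketch via minikube's bundled kubectl (profile name taken from the node name in the log):

		minikube -p old-k8s-version-619885 kubectl -- get events -A --sort-by=.lastTimestamp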
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
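
	Note: the dmesg excerpt is dominated by martian-source records, i.e. packets with pod or loopback source addresses arriving on eth0, which the kernel only reports when martian logging is enabled. Whether that sysctl is set on the node can be checked directly; a sketch:

		sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians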
	
	
	==> etcd [fdfeb0ddcbc9e81818edeaac2428def9a1bd1e558ad4e23f0d8f6775b7f2c5b9] <==
	{"level":"info","ts":"2025-10-18T09:43:40.345896Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-18T09:43:40.346047Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:43:40.345061Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346606Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346629Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-18T09:43:40.346717Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-18T09:43:40.34891Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-18T09:43:40.349126Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-18T09:43:40.349158Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-18T09:43:40.349188Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:43:40.349199Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-18T09:43:41.637635Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.63768Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.637727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-18T09:43:41.637746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637754Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.637779Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-18T09:43:41.63874Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-619885 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-18T09:43:41.638749Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:43:41.638779Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-18T09:43:41.639089Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-18T09:43:41.63911Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-18T09:43:41.640031Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-18T09:43:41.640068Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:44:19 up  1:26,  0 user,  load average: 2.38, 2.84, 1.79
	Linux old-k8s-version-619885 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2d9de25ec275f7a26f89e18a6bf459fac123effa83d7ee72e4855d9b3bd71070] <==
	I1018 09:43:43.304283       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:43:43.304501       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:43:43.304628       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:43:43.304648       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:43:43.304671       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:43:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:43:43.591337       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:43:43.591391       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:43:43.591402       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:43:43.591568       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:43:43.900019       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:43:43.900313       1 metrics.go:72] Registering metrics
	I1018 09:43:43.900395       1 controller.go:711] "Syncing nftables rules"
	I1018 09:43:53.592006       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:43:53.592085       1 main.go:301] handling current node
	I1018 09:44:03.591879       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:44:03.591908       1 main.go:301] handling current node
	I1018 09:44:13.591245       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:44:13.591271       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3] <==
	I1018 09:43:42.615176       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1018 09:43:42.615179       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1018 09:43:42.615326       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1018 09:43:42.615383       1 aggregator.go:166] initial CRD sync complete...
	I1018 09:43:42.615392       1 autoregister_controller.go:141] Starting autoregister controller
	I1018 09:43:42.615397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:43:42.615404       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:43:42.615711       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1018 09:43:42.615753       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1018 09:43:42.615897       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1018 09:43:42.616194       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:43:43.435704       1 controller.go:624] quota admission added evaluator for: namespaces
	I1018 09:43:43.469578       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1018 09:43:43.486780       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:43:43.494256       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:43:43.500540       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1018 09:43:43.517125       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:43:43.533160       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.208.17"}
	I1018 09:43:43.546668       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.120.167"}
	E1018 09:43:52.616906       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1018 09:43:55.546180       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:43:55.648985       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1018 09:43:55.710514       1 controller.go:624] quota admission added evaluator for: endpoints
	E1018 09:44:02.617462       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1018 09:44:12.618368       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
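
	Note: the only errors in the apiserver section are the repeated "Unable to derive new concurrency limits" messages from the API Priority and Fairness controller, arriving on a 10-second cadence (09:43:42, :52, 09:44:02, :12) while admission otherwise proceeds normally. Counting them confirms the cadence; a sketch against the apiserver container named in the section header:

		sudo crictl logs c46ec81af1bdf64d24ba9e436aeaa90b9063672e95d2002dd2a2ea63c5994da3 2>&1 | grep -c 'Unable to derive new concurrency limits'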
	
	
	==> kube-controller-manager [9dea26c3889d8fcde9ef123c494d3c45546f1760d8a72398c746eda2f2f6395b] <==
	I1018 09:43:55.451471       1 shared_informer.go:318] Caches are synced for resource quota
	I1018 09:43:55.655529       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1018 09:43:55.657712       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1018 09:43:55.669343       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-88pgw"
	I1018 09:43:55.670489       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-fm56d"
	I1018 09:43:55.682319       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="25.954478ms"
	I1018 09:43:55.682452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.420465ms"
	I1018 09:43:55.692942       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.38267ms"
	I1018 09:43:55.693157       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.439µs"
	I1018 09:43:55.701965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.582805ms"
	I1018 09:43:55.702050       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="50.149µs"
	I1018 09:43:55.707429       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="323.531µs"
	I1018 09:43:55.724102       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="87.8µs"
	I1018 09:43:55.726611       1 event.go:307] "Event occurred" object="kubernetes-dashboard" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/kubernetes-dashboard: endpoints \"kubernetes-dashboard\" already exists"
	I1018 09:43:55.726652       1 event.go:307] "Event occurred" object="dashboard-metrics-scraper" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kubernetes-dashboard/dashboard-metrics-scraper: endpoints \"dashboard-metrics-scraper\" already exists"
	I1018 09:43:55.770784       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:43:55.819743       1 shared_informer.go:318] Caches are synced for garbage collector
	I1018 09:43:55.819784       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1018 09:44:00.727141       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.955228ms"
	I1018 09:44:00.727299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.552µs"
	I1018 09:44:00.887880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.732786ms"
	I1018 09:44:00.888197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.427µs"
	I1018 09:44:02.876172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="92.366µs"
	I1018 09:44:03.883302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="131.947µs"
	I1018 09:44:04.887105       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="95.445µs"
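The controller-manager lines above are emitted alongside ordinary Kubernetes Events (ScalingReplicaSet, SuccessfulCreate, FailedToCreateEndpoint). A minimal sketch of retrieving the Warning events for the kubernetes-dashboard namespace with client-go, assuming in-cluster credentials:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the code runs inside a pod
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Warning events such as FailedToCreateEndpoint show up here.
		evs, err := cs.CoreV1().Events("kubernetes-dashboard").List(context.Background(),
			metav1.ListOptions{FieldSelector: "type=Warning"})
		if err != nil {
			panic(err)
		}
		for _, e := range evs.Items {
			fmt.Printf("%s %s: %s\n", e.Reason, e.InvolvedObject.Name, e.Message)
		}
	}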
	
	
	==> kube-proxy [9bac4afda2cd6a56903403041cc289b1df6e5601dec28bc97ecdf4758352ef1f] <==
	I1018 09:43:43.190987       1 server_others.go:69] "Using iptables proxy"
	I1018 09:43:43.201540       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1018 09:43:43.223568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:43:43.225907       1 server_others.go:152] "Using iptables Proxier"
	I1018 09:43:43.225989       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1018 09:43:43.226018       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1018 09:43:43.226081       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1018 09:43:43.226447       1 server.go:846] "Version info" version="v1.28.0"
	I1018 09:43:43.226503       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:43.228920       1 config.go:315] "Starting node config controller"
	I1018 09:43:43.228952       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1018 09:43:43.228939       1 config.go:188] "Starting service config controller"
	I1018 09:43:43.228982       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1018 09:43:43.229244       1 config.go:97] "Starting endpoint slice config controller"
	I1018 09:43:43.229259       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1018 09:43:43.330039       1 shared_informer.go:318] Caches are synced for service config
	I1018 09:43:43.330117       1 shared_informer.go:318] Caches are synced for node config
	I1018 09:43:43.330153       1 shared_informer.go:318] Caches are synced for endpoint slice config
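"Waiting for caches to sync" followed by "Caches are synced" is the standard shared-informer startup handshake: each informer performs an initial LIST, and the component blocks until every cache reports synced. A minimal sketch of the same pattern with client-go, assuming in-cluster credentials:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumption: running inside the cluster
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		stop := make(chan struct{})
		defer close(stop)

		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		svc := factory.Core().V1().Services().Informer()
		eps := factory.Discovery().V1().EndpointSlices().Informer()

		factory.Start(stop)
		// Equivalent of "Waiting for caches to sync": block until the initial LIST completes.
		if !cache.WaitForCacheSync(stop, svc.HasSynced, eps.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced for service and endpoint slice config")
	}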
	
	
	==> kube-scheduler [7fe7bf854b17230485448f3f9edffbf8256278410beebb814098460ced51012a] <==
	E1018 09:43:42.595004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1018 09:43:42.595008       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.594966       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1018 09:43:42.595060       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595066       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1018 09:43:42.595083       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595082       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1018 09:43:42.595142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1018 09:43:42.595149       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1018 09:43:42.595151       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1018 09:43:42.595162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1018 09:43:42.595165       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1018 09:43:42.595172       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1018 09:43:42.595180       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1018 09:43:42.595227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1018 09:43:42.595241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1018 09:43:42.595224       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1018 09:43:42.595543       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1018 09:43:42.595567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1018 09:43:42.595576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:43:42.595587       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1018 09:43:42.595586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1018 09:43:42.686154       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
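The forbidden list/watch errors above are the usual transient noise while the apiserver is still starting: the scheduler's RBAC bindings exist, but the authorizer's caches have not synced yet (note the final "Caches are synced" line). A client can probe its own permissions the same way the authorizer evaluates them, via a SelfSubjectAccessReview; a minimal sketch, assuming in-cluster credentials:

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Ask: may I list csinodes.storage.k8s.io cluster-wide?
		ssar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Group:    "storage.k8s.io",
					Resource: "csinodes",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.Background(), ssar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", resp.Status.Allowed, resp.Status.Reason)
	}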
	
	
	==> kubelet <==
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.406379     719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-config-volume podName:666ccb81-9bb0-4ee0-8fe1-8d060091f9b0 nodeName:}" failed. No retries permitted until 2025-10-18 09:43:50.406364612 +0000 UTC m=+10.703228459 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/666ccb81-9bb0-4ee0-8fe1-8d060091f9b0-config-volume") pod "coredns-5dd5756b68-wklp4" (UID: "666ccb81-9bb0-4ee0-8fe1-8d060091f9b0") : object "kube-system"/"coredns" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506785     719 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506855     719 projected.go:198] Error preparing data for projected volume kube-api-access-55xz5 for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:46 old-k8s-version-619885 kubelet[719]: E1018 09:43:46.506932     719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2e50d21c-d2e2-4cc7-b111-04c19153fc41-kube-api-access-55xz5 podName:2e50d21c-d2e2-4cc7-b111-04c19153fc41 nodeName:}" failed. No retries permitted until 2025-10-18 09:43:50.506910652 +0000 UTC m=+10.803774504 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-55xz5" (UniqueName: "kubernetes.io/projected/2e50d21c-d2e2-4cc7-b111-04c19153fc41-kube-api-access-55xz5") pod "busybox" (UID: "2e50d21c-d2e2-4cc7-b111-04c19153fc41") : object "default"/"kube-root-ca.crt" not registered
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.676760     719 topology_manager.go:215] "Topology Admit Handler" podUID="7390a37b-b66c-4dbe-85de-5ba96c9a7f24" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.681503     719 topology_manager.go:215] "Topology Admit Handler" podUID="b01b0763-878c-4706-a4ce-1b579eac767d" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760271     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7390a37b-b66c-4dbe-85de-5ba96c9a7f24-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-88pgw\" (UID: \"7390a37b-b66c-4dbe-85de-5ba96c9a7f24\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760343     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r7hn\" (UniqueName: \"kubernetes.io/projected/b01b0763-878c-4706-a4ce-1b579eac767d-kube-api-access-5r7hn\") pod \"dashboard-metrics-scraper-5f989dc9cf-fm56d\" (UID: \"b01b0763-878c-4706-a4ce-1b579eac767d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760775     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqhq\" (UniqueName: \"kubernetes.io/projected/7390a37b-b66c-4dbe-85de-5ba96c9a7f24-kube-api-access-rcqhq\") pod \"kubernetes-dashboard-8694d4445c-88pgw\" (UID: \"7390a37b-b66c-4dbe-85de-5ba96c9a7f24\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw"
	Oct 18 09:43:55 old-k8s-version-619885 kubelet[719]: I1018 09:43:55.760873     719 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/b01b0763-878c-4706-a4ce-1b579eac767d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-fm56d\" (UID: \"b01b0763-878c-4706-a4ce-1b579eac767d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d"
	Oct 18 09:44:02 old-k8s-version-619885 kubelet[719]: I1018 09:44:02.865075     719 scope.go:117] "RemoveContainer" containerID="e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc"
	Oct 18 09:44:02 old-k8s-version-619885 kubelet[719]: I1018 09:44:02.876075     719 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-88pgw" podStartSLOduration=3.855431169 podCreationTimestamp="2025-10-18 09:43:55 +0000 UTC" firstStartedPulling="2025-10-18 09:43:56.00286729 +0000 UTC m=+16.299731127" lastFinishedPulling="2025-10-18 09:44:00.023454168 +0000 UTC m=+20.320318015" observedRunningTime="2025-10-18 09:44:00.876183203 +0000 UTC m=+21.173047085" watchObservedRunningTime="2025-10-18 09:44:02.876018057 +0000 UTC m=+23.172881912"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: I1018 09:44:03.870479     719 scope.go:117] "RemoveContainer" containerID="e76b86527972d2fcd5547561249cd9984ef49dee338f65322663e5ffad7acafc"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: I1018 09:44:03.870687     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:03 old-k8s-version-619885 kubelet[719]: E1018 09:44:03.871062     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:04 old-k8s-version-619885 kubelet[719]: I1018 09:44:04.875354     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:04 old-k8s-version-619885 kubelet[719]: E1018 09:44:04.875744     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:05 old-k8s-version-619885 kubelet[719]: I1018 09:44:05.983233     719 scope.go:117] "RemoveContainer" containerID="23f28b00046885ed3722a8258f6cd92c80978d95a4a0c82c1094e9b69cf27e37"
	Oct 18 09:44:05 old-k8s-version-619885 kubelet[719]: E1018 09:44:05.983591     719 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-fm56d_kubernetes-dashboard(b01b0763-878c-4706-a4ce-1b579eac767d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-fm56d" podUID="b01b0763-878c-4706-a4ce-1b579eac767d"
	Oct 18 09:44:13 old-k8s-version-619885 kubelet[719]: I1018 09:44:13.895325     719 scope.go:117] "RemoveContainer" containerID="868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597"
	Oct 18 09:44:14 old-k8s-version-619885 kubelet[719]: I1018 09:44:14.621227     719 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:44:14 old-k8s-version-619885 systemd[1]: kubelet.service: Consumed 1.205s CPU time.
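The "back-off 10s restarting failed container" messages are kubelet's CrashLoopBackOff policy: each failed restart doubles the wait before the next attempt, capped at five minutes in upstream kubelet. A generic sketch of that doubling backoff; startContainer is a hypothetical stand-in for the retried operation, not a kubelet API:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startContainer is a hypothetical placeholder for the operation being retried.
	func startContainer() error { return errors.New("container exited") }

	func main() {
		delay := 10 * time.Second
		const maxDelay = 5 * time.Minute

		for attempt := 1; attempt <= 5; attempt++ {
			if err := startContainer(); err == nil {
				fmt.Println("container started")
				return
			}
			fmt.Printf("attempt %d failed; backing off %s\n", attempt, delay)
			time.Sleep(delay)
			delay *= 2 // double after each failure, like CrashLoopBackOff
			if delay > maxDelay {
				delay = maxDelay
			}
		}
		fmt.Println("giving up")
	}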
	
	
	==> kubernetes-dashboard [2d6a72283c35fffb748de47518ddeea3904e292dbab05a98cbc4f1cc59c4ba64] <==
	2025/10/18 09:44:00 Starting overwatch
	2025/10/18 09:44:00 Using namespace: kubernetes-dashboard
	2025/10/18 09:44:00 Using in-cluster config to connect to apiserver
	2025/10/18 09:44:00 Using secret token for csrf signing
	2025/10/18 09:44:00 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:44:00 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:44:00 Successful initial request to the apiserver, version: v1.28.0
	2025/10/18 09:44:00 Generating JWE encryption key
	2025/10/18 09:44:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:44:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:44:00 Initializing JWE encryption key from synchronized object
	2025/10/18 09:44:00 Creating in-cluster Sidecar client
	2025/10/18 09:44:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:44:00 Serving insecurely on HTTP port: 9090
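The dashboard's startup lines ("Using in-cluster config to connect to apiserver", "Successful initial request to the apiserver, version: v1.28.0") follow the standard client-go bootstrap: read the service-account token and CA mounted into the pod, then make a first request. A minimal sketch of that handshake:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Reads the service-account token and CA mounted into every pod.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Same initial request the dashboard logs: fetch the server version.
		v, err := cs.Discovery().ServerVersion()
		if err != nil {
			panic(err)
		}
		fmt.Println("apiserver version:", v.GitVersion)
	}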
	
	
	==> storage-provisioner [5b9a25d7ca89e5a5f227c89e4c65fda1f57fea58ab2f00baf53866e09b9a19ea] <==
	I1018 09:44:13.943233       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:44:13.951919       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:44:13.951970       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [868ad4152848f6a63e0415e5c6a4814b9c54f45ecbc00e5d458c6b9cedd73597] <==
	I1018 09:43:43.159793       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:44:13.163097       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
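This second storage-provisioner instance died because the in-cluster apiserver VIP was unreachable (dial tcp 10.96.0.1:443: i/o timeout), consistent with the node being restarted underneath it. A quick sketch of probing that endpoint with a short, bounded timeout instead of client-go's default 32s; the address is the service VIP from the log line above:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the default kubernetes.default service VIP in this report.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable")
	}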
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-619885 -n old-k8s-version-619885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-619885 -n old-k8s-version-619885: exit status 2 (307.292587ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
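The harness extracts a single field from minikube status with a Go template (--format={{.APIServer}}). A small sketch of how such a template renders against a status struct; the struct here is illustrative, using only the field names that appear in the commands above:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the field names the harness templates against ({{.Host}}, {{.APIServer}}).
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// "Running" is what the post-mortem above printed for this field.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}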
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-619885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-589869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-589869 --alsologtostderr -v=1: exit status 80 (2.213004026s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-589869 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:44:40.834713  376690 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:40.835042  376690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:40.835053  376690 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:40.835060  376690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:40.835397  376690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:40.835698  376690 out.go:368] Setting JSON to false
	I1018 09:44:40.835755  376690 mustload.go:65] Loading cluster: no-preload-589869
	I1018 09:44:40.836282  376690 config.go:182] Loaded profile config "no-preload-589869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:40.836873  376690 cli_runner.go:164] Run: docker container inspect no-preload-589869 --format={{.State.Status}}
	I1018 09:44:40.859001  376690 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:44:40.859402  376690 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:40.932407  376690 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:40.920655242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:40.933353  376690 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-589869 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:44:40.935475  376690 out.go:179] * Pausing node no-preload-589869 ... 
	I1018 09:44:40.936781  376690 host.go:66] Checking if "no-preload-589869" exists ...
	I1018 09:44:40.937152  376690 ssh_runner.go:195] Run: systemctl --version
	I1018 09:44:40.937240  376690 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-589869
	I1018 09:44:40.958549  376690 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33196 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/no-preload-589869/id_rsa Username:docker}
	I1018 09:44:41.056074  376690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:41.069025  376690 pause.go:52] kubelet running: true
	I1018 09:44:41.069111  376690 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:41.242767  376690 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:41.242875  376690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:41.347439  376690 cri.go:89] found id: "058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc"
	I1018 09:44:41.347524  376690 cri.go:89] found id: "1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1"
	I1018 09:44:41.347531  376690 cri.go:89] found id: "5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	I1018 09:44:41.347536  376690 cri.go:89] found id: "6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc"
	I1018 09:44:41.347548  376690 cri.go:89] found id: "f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a"
	I1018 09:44:41.347552  376690 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:44:41.347556  376690 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:44:41.347560  376690 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:44:41.347563  376690 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:44:41.347583  376690 cri.go:89] found id: "396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	I1018 09:44:41.347587  376690 cri.go:89] found id: "147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372"
	I1018 09:44:41.347591  376690 cri.go:89] found id: ""
	I1018 09:44:41.347644  376690 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:41.361071  376690 retry.go:31] will retry after 132.187952ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:41Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:41.494467  376690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:41.513382  376690 pause.go:52] kubelet running: false
	I1018 09:44:41.513447  376690 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:41.699201  376690 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:41.699285  376690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:41.769878  376690 cri.go:89] found id: "058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc"
	I1018 09:44:41.769901  376690 cri.go:89] found id: "1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1"
	I1018 09:44:41.769904  376690 cri.go:89] found id: "5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	I1018 09:44:41.769907  376690 cri.go:89] found id: "6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc"
	I1018 09:44:41.769910  376690 cri.go:89] found id: "f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a"
	I1018 09:44:41.769913  376690 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:44:41.769915  376690 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:44:41.769918  376690 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:44:41.769920  376690 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:44:41.769958  376690 cri.go:89] found id: "396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	I1018 09:44:41.769966  376690 cri.go:89] found id: "147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372"
	I1018 09:44:41.769969  376690 cri.go:89] found id: ""
	I1018 09:44:41.770005  376690 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:41.782104  376690 retry.go:31] will retry after 231.610644ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:41Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:42.014625  376690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:42.027982  376690 pause.go:52] kubelet running: false
	I1018 09:44:42.028048  376690 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:42.174376  376690 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:42.174516  376690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:42.244543  376690 cri.go:89] found id: "058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc"
	I1018 09:44:42.244567  376690 cri.go:89] found id: "1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1"
	I1018 09:44:42.244573  376690 cri.go:89] found id: "5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	I1018 09:44:42.244578  376690 cri.go:89] found id: "6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc"
	I1018 09:44:42.244581  376690 cri.go:89] found id: "f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a"
	I1018 09:44:42.244586  376690 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:44:42.244596  376690 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:44:42.244600  376690 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:44:42.244604  376690 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:44:42.244621  376690 cri.go:89] found id: "396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	I1018 09:44:42.244626  376690 cri.go:89] found id: "147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372"
	I1018 09:44:42.244630  376690 cri.go:89] found id: ""
	I1018 09:44:42.244674  376690 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:42.256515  376690 retry.go:31] will retry after 392.618707ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:42Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:44:42.650026  376690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:44:42.667618  376690 pause.go:52] kubelet running: false
	I1018 09:44:42.667697  376690 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:44:42.872323  376690 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:44:42.872410  376690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:44:42.959117  376690 cri.go:89] found id: "058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc"
	I1018 09:44:42.959141  376690 cri.go:89] found id: "1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1"
	I1018 09:44:42.959147  376690 cri.go:89] found id: "5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	I1018 09:44:42.959152  376690 cri.go:89] found id: "6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc"
	I1018 09:44:42.959156  376690 cri.go:89] found id: "f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a"
	I1018 09:44:42.959161  376690 cri.go:89] found id: "8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2"
	I1018 09:44:42.959165  376690 cri.go:89] found id: "e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756"
	I1018 09:44:42.959169  376690 cri.go:89] found id: "3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827"
	I1018 09:44:42.959172  376690 cri.go:89] found id: "365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161"
	I1018 09:44:42.959178  376690 cri.go:89] found id: "396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	I1018 09:44:42.959181  376690 cri.go:89] found id: "147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372"
	I1018 09:44:42.959184  376690 cri.go:89] found id: ""
	I1018 09:44:42.959218  376690 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:44:42.977961  376690 out.go:203] 
	W1018 09:44:42.979190  376690 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:44:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:44:42.979208  376690 out.go:285] * 
	* 
	W1018 09:44:42.985268  376690 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:44:42.986557  376690 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-589869 --alsologtostderr -v=1 failed: exit status 80
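The stderr trace above shows minikube retrying sudo runc list -f json with growing, jittered delays (132ms, 231ms, 392ms) before giving up with GUEST_PAUSE. A generic sketch of that retry shape; the growth factor and jitter are assumptions for illustration, not minikube's exact constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a jittered,
	// growing delay between tries, similar to the spacing seen in the log above.
	func retryWithBackoff(fn func() error, attempts int) error {
		delay := 130 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 4)))
			time.Sleep(delay + jitter)
			delay = delay * 17 / 10 // ~1.7x growth per attempt (assumed factor)
		}
		return fmt.Errorf("all %d attempts failed: %w", attempts, err)
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New("open /run/runc: no such file or directory")
		}, 4)
		fmt.Println(err)
	}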
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589869
helpers_test.go:243: (dbg) docker inspect no-preload-589869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	        "Created": "2025-10-18T09:42:25.517759152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367117,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:43:46.112117555Z",
	            "FinishedAt": "2025-10-18T09:43:45.318110063Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hostname",
	        "HostsPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hosts",
	        "LogPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58-json.log",
	        "Name": "/no-preload-589869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	                "LowerDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589869",
	                "Source": "/var/lib/docker/volumes/no-preload-589869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589869",
	                "name.minikube.sigs.k8s.io": "no-preload-589869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e548c2ab67d6c14138d9a41050cb5e0560402efc66d6da03bc88304c9bc5a62e",
	            "SandboxKey": "/var/run/docker/netns/e548c2ab67d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-589869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:5d:eb:c9:a4:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b43a4d9c76b9aa8730370c98575c8c91fc6813136b487c412c5288120a5a3e49",
	                    "EndpointID": "ed49f00cd706d41a0099e5527d3b4a68e014de187029193fc1a1b1573f39744c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589869",
	                        "0eccfe69a507"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
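The SSH host port in this inspect output (22/tcp -> 127.0.0.1:33196) is exactly what minikube resolved earlier in the trace with docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'. A sketch of running the same lookup from Go; the container name is the profile from this test:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template minikube used above to resolve the SSH host port.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "no-preload-589869").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // expect 33196 per the inspect output above
	}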
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869: exit status 2 (339.690183ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
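`exit status 2` alongside a `Running` host is the expected shape here: the pause test has just paused the control plane, and `minikube status` appears to signal any non-Running component through its exit code while still printing the host state. A sketch of the manual check, reusing the harness's own flags:

  $ out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
  Running
  $ echo $?
  2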
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-589869 logs -n 25: (1.234715543s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175        │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
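	The `pause` row above is the failing step: it has a START TIME but no END TIME, which in this audit table typically means the command never completed successfully. Replaying it by hand uses exactly the arguments recorded in the row:

	  $ out/minikube-linux-amd64 pause -p no-preload-589869 --alsologtostderr -v=1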
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:44:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:44:41.368864  376816 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:41.369231  376816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:41.369239  376816 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:41.369248  376816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:41.369587  376816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:41.370168  376816 out.go:368] Setting JSON to false
	I1018 09:44:41.371839  376816 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5225,"bootTime":1760775456,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:44:41.371955  376816 start.go:141] virtualization: kvm guest
	I1018 09:44:41.374034  376816 out.go:179] * [cert-expiration-650496] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:44:41.375316  376816 notify.go:220] Checking for updates...
	I1018 09:44:41.375395  376816 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:44:41.376818  376816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:44:41.378086  376816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:44:41.379209  376816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:44:41.380348  376816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:44:41.381486  376816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:44:41.383224  376816 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:41.383921  376816 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:44:41.425604  376816 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:44:41.425712  376816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:41.502594  376816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:41.489372736 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:41.502749  376816 docker.go:318] overlay module found
	I1018 09:44:41.507275  376816 out.go:179] * Using the docker driver based on existing profile
	I1018 09:44:41.508662  376816 start.go:305] selected driver: docker
	I1018 09:44:41.508673  376816 start.go:925] validating driver "docker" against &{Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:41.508779  376816 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:41.510529  376816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:41.600978  376816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:41.587377819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:41.601513  376816 cni.go:84] Creating CNI manager for ""
	I1018 09:44:41.601607  376816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:41.601671  376816 start.go:349] cluster config:
	{Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:41.603438  376816 out.go:179] * Starting "cert-expiration-650496" primary control-plane node in "cert-expiration-650496" cluster
	I1018 09:44:41.604782  376816 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:41.605948  376816 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:41.607004  376816 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:41.607043  376816 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:41.607058  376816 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:41.607135  376816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:41.607150  376816 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:41.607157  376816 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:41.607251  376816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/config.json ...
	I1018 09:44:41.629873  376816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:41.629888  376816 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:41.629907  376816 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:41.629935  376816 start.go:360] acquireMachinesLock for cert-expiration-650496: {Name:mkff120a47d272c6a75e24f68f43639d7a715083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:41.629987  376816 start.go:364] duration metric: took 35.6µs to acquireMachinesLock for "cert-expiration-650496"
	I1018 09:44:41.630000  376816 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:44:41.630003  376816 fix.go:54] fixHost starting: 
	I1018 09:44:41.630201  376816 cli_runner.go:164] Run: docker container inspect cert-expiration-650496 --format={{.State.Status}}
	I1018 09:44:41.648262  376816 fix.go:112] recreateIfNeeded on cert-expiration-650496: state=Running err=<nil>
	W1018 09:44:41.648291  376816 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:44:38.885934  373771 out.go:252]   - Booting up control plane ...
	I1018 09:44:38.886061  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:44:38.886170  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:44:38.886744  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:44:38.901424  373771 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:44:38.901563  373771 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:44:38.908059  373771 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:44:38.908287  373771 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:44:38.908350  373771 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:44:39.007287  373771 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:44:39.007416  373771 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:44:39.509027  373771 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.911514ms
	I1018 09:44:39.512885  373771 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:44:39.513026  373771 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:44:39.513163  373771 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:44:39.513305  373771 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:44:41.467696  373771 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.953869176s
	I1018 09:44:41.587501  373771 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.074310907s
	
	
	==> CRI-O <==
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.106192883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.109551471Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.109573284Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.347465993Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb67ab31-b64f-4827-8025-3d6870bba1d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.348511559Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f0b0a29-c178-4448-8275-f8fa71fbe7b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.349588169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=fa9a6802-1d87-4eef-8de9-631bcd68140e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.34989974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.355485904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.356060496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.395853131Z" level=info msg="Created container 396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=fa9a6802-1d87-4eef-8de9-631bcd68140e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.396474991Z" level=info msg="Starting container: 396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1" id=c9036589-d4b8-4320-8a1f-1ccff4406e8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.398385638Z" level=info msg="Started container" PID=1741 containerID=396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper id=c9036589-d4b8-4320-8a1f-1ccff4406e8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8dd809b3821458bbc103ca3e998df5896396f25425b99344ab31a2c8b4fcbf1
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.458904633Z" level=info msg="Removing container: 0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20" id=ee0efc63-8d14-4255-8404-d48857677229 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.470471869Z" level=info msg="Removed container 0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=ee0efc63-8d14-4255-8404-d48857677229 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.463362679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=580b6898-19e6-4b67-81ac-205a79b7cfaa name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.492075394Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a11d06e4-ab21-4929-b810-4c183901023f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.552216492Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2cab525b-17b4-4635-b03a-25cfb1f0b505 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.552493283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.63690268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637116859Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0bf37c4841aa483945b7a02ef5f9b25bd89d94184a48ae5a170c17f1b33c9be9/merged/etc/passwd: no such file or directory"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637145117Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0bf37c4841aa483945b7a02ef5f9b25bd89d94184a48ae5a170c17f1b33c9be9/merged/etc/group: no such file or directory"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637431983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.714162996Z" level=info msg="Created container 058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc: kube-system/storage-provisioner/storage-provisioner" id=2cab525b-17b4-4635-b03a-25cfb1f0b505 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.714930849Z" level=info msg="Starting container: 058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc" id=88173215-e198-4620-b634-f4f9dc33e1d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.717249386Z" level=info msg="Started container" PID=1755 containerID=058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc description=kube-system/storage-provisioner/storage-provisioner id=88173215-e198-4620-b634-f4f9dc33e1d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=815a103561fddbda0a2ceb9c79a986bfdfecc6cc53a97284c1ef0c14d44e8dc7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	058fe5ecd4e4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   815a103561fdd       storage-provisioner                          kube-system
	396745a65f0a7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           18 seconds ago      Exited              dashboard-metrics-scraper   2                   c8dd809b38214       dashboard-metrics-scraper-6ffb444bf9-wtprm   kubernetes-dashboard
	147f4581c55b5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   3a646b16cdff5       kubernetes-dashboard-855c9754f9-cckhv        kubernetes-dashboard
	1a10a488ac761       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   a72b6301d8a91       coredns-66bc5c9577-pck54                     kube-system
	376d9ae981623       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   a6e5ce289feda       busybox                                      default
	5c7847fab0c84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   815a103561fdd       storage-provisioner                          kube-system
	6776f5211a0e8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   c2a136ae99ecb       kindnet-zjqmf                                kube-system
	f16f92d94527f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   a6d523f23a5a1       kube-proxy-45kpn                             kube-system
	8ea25fde146e8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   17ab159cf8b9a       kube-controller-manager-no-preload-589869    kube-system
	e90a7d734d675       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   5d235957bf4a0       etcd-no-preload-589869                       kube-system
	3021ebf25ee25       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   0f4d7162119df       kube-scheduler-no-preload-589869             kube-system
	365f44dae4ed2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   059984b318bbc       kube-apiserver-no-preload-589869             kube-system
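	The container listing above is CRI-O's runtime view of the node. Assuming the profile is still up, roughly the same table can be pulled by hand through minikube's ssh helper (a sketch, not part of the harness output):

	  $ out/minikube-linux-amd64 -p no-preload-589869 ssh -- sudo crictl ps -a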
	
	
	==> coredns [1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54635 - 23773 "HINFO IN 7698436634749166641.1414637754520399092. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022413161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
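	The i/o timeouts against 10.96.0.1:443 above mean CoreDNS could not reach the in-cluster kubernetes Service for a window, consistent with the control plane being paused and restarted by this test rather than with a DNS fault. A quick manual probe of that Service (a sketch, assuming the kubeconfig context minikube creates per profile):

	  $ kubectl --context no-preload-589869 -n default get endpoints kubernetes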
	
	
	==> describe nodes <==
	Name:               no-preload-589869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-589869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:44:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:43:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-589869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6a71982a-ecb5-4a3a-b089-e736cb5f928f
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-pck54                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-589869                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-zjqmf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-589869              250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-no-preload-589869     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-45kpn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-589869              100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wtprm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cckhv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node no-preload-589869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node no-preload-589869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node no-preload-589869 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node no-preload-589869 event: Registered Node no-preload-589869 in Controller
	  Normal  NodeReady                90s                kubelet          Node no-preload-589869 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node no-preload-589869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node no-preload-589869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node no-preload-589869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node no-preload-589869 event: Registered Node no-preload-589869 in Controller
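	The node dump above can be regenerated against the live profile with the standard describe call (same context assumption as above):

	  $ kubectl --context no-preload-589869 describe node no-preload-589869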
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756] <==
	{"level":"warn","ts":"2025-10-18T09:43:54.475511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.494997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.498263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.505329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.512375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.570183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:44:26.635059Z","caller":"traceutil/trace.go:172","msg":"trace[1436335138] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"164.905871ms","start":"2025-10-18T09:44:26.470133Z","end":"2025-10-18T09:44:26.635039Z","steps":["trace[1436335138] 'process raft request'  (duration: 164.784597ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:26.847799Z","caller":"traceutil/trace.go:172","msg":"trace[357102648] linearizableReadLoop","detail":"{readStateIndex:652; appliedIndex:652; }","duration":"106.711546ms","start":"2025-10-18T09:44:26.741062Z","end":"2025-10-18T09:44:26.847773Z","steps":["trace[357102648] 'read index received'  (duration: 106.704025ms)","trace[357102648] 'applied index is now lower than readState.Index'  (duration: 6.524µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:26.862698Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.614971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-10-18T09:44:26.862808Z","caller":"traceutil/trace.go:172","msg":"trace[1445016313] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:618; }","duration":"121.724952ms","start":"2025-10-18T09:44:26.741051Z","end":"2025-10-18T09:44:26.862776Z","steps":["trace[1445016313] 'agreement among raft nodes before linearized reading'  (duration: 106.791268ms)","trace[1445016313] 'range keys from in-memory index tree'  (duration: 14.735602ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:26.862871Z","caller":"traceutil/trace.go:172","msg":"trace[2128761252] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"144.271421ms","start":"2025-10-18T09:44:26.718515Z","end":"2025-10-18T09:44:26.862787Z","steps":["trace[2128761252] 'process raft request'  (duration: 129.287013ms)","trace[2128761252] 'compare'  (duration: 14.875115ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:27.583396Z","caller":"traceutil/trace.go:172","msg":"trace[168689661] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"200.671861ms","start":"2025-10-18T09:44:27.382709Z","end":"2025-10-18T09:44:27.583380Z","steps":["trace[168689661] 'process raft request'  (duration: 200.550824ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.713330Z","caller":"traceutil/trace.go:172","msg":"trace[149135860] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:655; }","duration":"109.953631ms","start":"2025-10-18T09:44:27.603349Z","end":"2025-10-18T09:44:27.713302Z","steps":["trace[149135860] 'read index received'  (duration: 109.945154ms)","trace[149135860] 'applied index is now lower than readState.Index'  (duration: 7.335µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:27.722209Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.835399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-pck54\" limit:1 ","response":"range_response_count:1 size:5755"}
	{"level":"info","ts":"2025-10-18T09:44:27.722276Z","caller":"traceutil/trace.go:172","msg":"trace[1144694429] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-pck54; range_end:; response_count:1; response_revision:621; }","duration":"118.915874ms","start":"2025-10-18T09:44:27.603339Z","end":"2025-10-18T09:44:27.722255Z","steps":["trace[1144694429] 'agreement among raft nodes before linearized reading'  (duration: 110.048677ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.722289Z","caller":"traceutil/trace.go:172","msg":"trace[1317608009] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"134.544313ms","start":"2025-10-18T09:44:27.587731Z","end":"2025-10-18T09:44:27.722275Z","steps":["trace[1317608009] 'process raft request'  (duration: 125.688933ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.722281Z","caller":"traceutil/trace.go:172","msg":"trace[1191704746] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"134.52482ms","start":"2025-10-18T09:44:27.587742Z","end":"2025-10-18T09:44:27.722267Z","steps":["trace[1191704746] 'process raft request'  (duration: 134.486792ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.772541Z","caller":"traceutil/trace.go:172","msg":"trace[1604303708] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"183.164846ms","start":"2025-10-18T09:44:27.589356Z","end":"2025-10-18T09:44:27.772521Z","steps":["trace[1604303708] 'process raft request'  (duration: 183.038779ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.772705Z","caller":"traceutil/trace.go:172","msg":"trace[2138093964] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"180.93727ms","start":"2025-10-18T09:44:27.591751Z","end":"2025-10-18T09:44:27.772688Z","steps":["trace[2138093964] 'process raft request'  (duration: 180.740091ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.900478Z","caller":"traceutil/trace.go:172","msg":"trace[169008884] linearizableReadLoop","detail":"{readStateIndex:659; appliedIndex:659; }","duration":"119.092627ms","start":"2025-10-18T09:44:27.781363Z","end":"2025-10-18T09:44:27.900456Z","steps":["trace[169008884] 'read index received'  (duration: 119.086698ms)","trace[169008884] 'applied index is now lower than readState.Index'  (duration: 5.17µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:27.932712Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.328152ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-589869\" limit:1 ","response":"range_response_count:1 size:5235"}
	{"level":"info","ts":"2025-10-18T09:44:27.932757Z","caller":"traceutil/trace.go:172","msg":"trace[225141672] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"152.328374ms","start":"2025-10-18T09:44:27.780410Z","end":"2025-10-18T09:44:27.932738Z","steps":["trace[225141672] 'process raft request'  (duration: 120.110277ms)","trace[225141672] 'compare'  (duration: 32.101785ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:27.932815Z","caller":"traceutil/trace.go:172","msg":"trace[1201379552] range","detail":"{range_begin:/registry/minions/no-preload-589869; range_end:; response_count:1; response_revision:625; }","duration":"151.394565ms","start":"2025-10-18T09:44:27.781359Z","end":"2025-10-18T09:44:27.932754Z","steps":["trace[1201379552] 'agreement among raft nodes before linearized reading'  (duration: 119.168614ms)","trace[1201379552] 'range keys from in-memory index tree'  (duration: 32.076266ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:28.304854Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.427028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-45kpn\" limit:1 ","response":"range_response_count:1 size:5043"}
	{"level":"info","ts":"2025-10-18T09:44:28.304921Z","caller":"traceutil/trace.go:172","msg":"trace[689393160] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-45kpn; range_end:; response_count:1; response_revision:626; }","duration":"100.539027ms","start":"2025-10-18T09:44:28.204368Z","end":"2025-10-18T09:44:28.304907Z","steps":["trace[689393160] 'range keys from in-memory index tree'  (duration: 100.279057ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:44 up  1:27,  0 user,  load average: 2.26, 2.77, 1.80
	Linux no-preload-589869 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc] <==
	I1018 09:43:55.887079       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:43:55.887318       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:43:55.887472       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:43:55.887487       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:43:55.887505       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:43:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:43:56.090482       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:43:56.090518       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:43:56.090533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:43:56.090699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:43:56.521157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:43:56.521180       1 metrics.go:72] Registering metrics
	I1018 09:43:56.521244       1 controller.go:711] "Syncing nftables rules"
	I1018 09:44:06.090699       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:06.090751       1 main.go:301] handling current node
	I1018 09:44:16.090947       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:16.090977       1 main.go:301] handling current node
	I1018 09:44:26.091141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:26.091177       1 main.go:301] handling current node
	I1018 09:44:36.094882       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:36.094909       1 main.go:301] handling current node
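	
	The "nri plugin exited" line above is kindnet failing to reach an NRI socket that is not provisioned on this node; whether the socket exists can be checked directly (profile name taken from this run):
	
	  out/minikube-linux-amd64 -p no-preload-589869 ssh -- ls -l /var/run/nri/nri.sock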
	
	
	==> kube-apiserver [365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161] <==
	I1018 09:43:55.045278       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:43:55.045075       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:43:55.045406       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:43:55.045574       1 policy_source.go:240] refreshing policies
	I1018 09:43:55.045100       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:43:55.045614       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:43:55.045620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:43:55.045626       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:43:55.045140       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:43:55.045690       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:43:55.045166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:43:55.051128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:43:55.056798       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:43:55.064937       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:43:55.343898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:43:55.365278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:43:55.388177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:43:55.415339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:43:55.425061       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:43:55.475548       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.151.147"}
	I1018 09:43:55.483925       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.164.57"}
	I1018 09:43:55.947528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:43:58.649745       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:43:58.951357       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:43:59.001417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2] <==
	I1018 09:43:58.370512       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:43:58.394091       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:43:58.394574       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:43:58.395570       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:43:58.395713       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:43:58.395764       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:43:58.395782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:43:58.396014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:43:58.396033       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:43:58.396017       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:43:58.396017       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:43:58.396112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:43:58.397505       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:43:58.400795       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:43:58.400997       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:43:58.401028       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:43:58.401036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:43:58.401044       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:43:58.401184       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:43:58.401218       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:43:58.412149       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:43:58.418348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:43:58.418367       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:43:58.418375       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:43:58.426444       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a] <==
	I1018 09:43:55.775708       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:43:55.831404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:43:55.931854       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:43:55.931890       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:43:55.931963       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:43:55.951619       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:43:55.951672       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:43:55.957617       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:43:55.958287       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:43:55.958354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:55.960954       1 config.go:200] "Starting service config controller"
	I1018 09:43:55.960974       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:43:55.961008       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:43:55.961015       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:43:55.961035       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:43:55.961040       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:43:55.961280       1 config.go:309] "Starting node config controller"
	I1018 09:43:55.961297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:43:55.961305       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:43:56.061808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:43:56.061845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:43:56.061876       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827] <==
	I1018 09:43:54.070344       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:43:55.012568       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:43:55.012592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:55.017284       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:43:55.017316       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:43:55.017373       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.017398       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.017410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:43:55.017467       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:43:55.017768       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:43:55.017851       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:43:55.117524       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:43:55.117513       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.117703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:43:55 no-preload-589869 kubelet[712]: I1018 09:43:55.464062     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9c851a2c-8320-45ae-9c2f-3f60bc0401c8-tmp\") pod \"storage-provisioner\" (UID: \"9c851a2c-8320-45ae-9c2f-3f60bc0401c8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984366     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdv4\" (UniqueName: \"kubernetes.io/projected/3a9478e4-6026-4abd-9276-ffd01cf7b5ff-kube-api-access-wgdv4\") pod \"dashboard-metrics-scraper-6ffb444bf9-wtprm\" (UID: \"3a9478e4-6026-4abd-9276-ffd01cf7b5ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984442     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f48c99e-2020-467e-951d-38d637d68c79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cckhv\" (UID: \"8f48c99e-2020-467e-951d-38d637d68c79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984476     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3a9478e4-6026-4abd-9276-ffd01cf7b5ff-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wtprm\" (UID: \"3a9478e4-6026-4abd-9276-ffd01cf7b5ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984501     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxj6\" (UniqueName: \"kubernetes.io/projected/8f48c99e-2020-467e-951d-38d637d68c79-kube-api-access-dnxj6\") pod \"kubernetes-dashboard-855c9754f9-cckhv\" (UID: \"8f48c99e-2020-467e-951d-38d637d68c79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv"
	Oct 18 09:44:02 no-preload-589869 kubelet[712]: I1018 09:44:02.393321     712 scope.go:117] "RemoveContainer" containerID="b5585fc5c98f760e5ff9575e79132ec8aeb47ce8371a44fff4fe1b14192d2fb2"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: I1018 09:44:03.397621     712 scope.go:117] "RemoveContainer" containerID="b5585fc5c98f760e5ff9575e79132ec8aeb47ce8371a44fff4fe1b14192d2fb2"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: I1018 09:44:03.397775     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: E1018 09:44:03.397985     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:04 no-preload-589869 kubelet[712]: I1018 09:44:04.402478     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:04 no-preload-589869 kubelet[712]: E1018 09:44:04.402693     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:06 no-preload-589869 kubelet[712]: I1018 09:44:06.419895     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv" podStartSLOduration=2.167720227 podStartE2EDuration="8.4198764s" podCreationTimestamp="2025-10-18 09:43:58 +0000 UTC" firstStartedPulling="2025-10-18 09:43:59.250377998 +0000 UTC m=+7.003683629" lastFinishedPulling="2025-10-18 09:44:05.502534172 +0000 UTC m=+13.255839802" observedRunningTime="2025-10-18 09:44:06.419688554 +0000 UTC m=+14.172994209" watchObservedRunningTime="2025-10-18 09:44:06.4198764 +0000 UTC m=+14.173182038"
	Oct 18 09:44:10 no-preload-589869 kubelet[712]: I1018 09:44:10.583508     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:10 no-preload-589869 kubelet[712]: E1018 09:44:10.583697     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.346923     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.456978     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.457433     712 scope.go:117] "RemoveContainer" containerID="396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: E1018 09:44:25.457661     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:26 no-preload-589869 kubelet[712]: I1018 09:44:26.463021     712 scope.go:117] "RemoveContainer" containerID="5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	Oct 18 09:44:30 no-preload-589869 kubelet[712]: I1018 09:44:30.584301     712 scope.go:117] "RemoveContainer" containerID="396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	Oct 18 09:44:30 no-preload-589869 kubelet[712]: E1018 09:44:30.584482     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:41 no-preload-589869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:44:41 no-preload-589869 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:44:41 no-preload-589869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:44:41 no-preload-589869 systemd[1]: kubelet.service: Consumed 1.553s CPU time.
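	
	For the dashboard-metrics-scraper CrashLoopBackOff reported above, the previous container attempt's output is usually the quickest lead; one way to fetch it (pod name copied from this run):
	
	  kubectl --context no-preload-589869 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-wtprm --previous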
	
	
	==> kubernetes-dashboard [147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372] <==
	2025/10/18 09:44:05 Starting overwatch
	2025/10/18 09:44:05 Using namespace: kubernetes-dashboard
	2025/10/18 09:44:05 Using in-cluster config to connect to apiserver
	2025/10/18 09:44:05 Using secret token for csrf signing
	2025/10/18 09:44:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:44:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:44:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:44:05 Generating JWE encryption key
	2025/10/18 09:44:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:44:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:44:05 Initializing JWE encryption key from synchronized object
	2025/10/18 09:44:05 Creating in-cluster Sidecar client
	2025/10/18 09:44:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:44:05 Serving insecurely on HTTP port: 9090
	2025/10/18 09:44:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc] <==
	I1018 09:44:26.731637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:44:26.739717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:44:26.739765       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:44:26.863993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:30.318762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:34.578767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:38.177863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:41.231476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.254666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.259582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:44:44.259767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:44:44.259940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e!
	I1018 09:44:44.259998       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ffa4ca64-af5f-429e-8808-12f7378aafdf", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e became leader
	W1018 09:44:44.261684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.266729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:44:44.360171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e!
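	
	The v1 Endpoints deprecation warnings above come from the provisioner's leader election, which keeps its lock in an Endpoints object named in the lease message; that record can be inspected with:
	
	  kubectl --context no-preload-589869 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml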
	
	
	==> storage-provisioner [5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921] <==
	I1018 09:43:55.742572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:44:25.747303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
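	
	The fatal error above means this earlier provisioner instance never reached the apiserver service IP before its 32s version probe timed out; reachability from inside the node can be probed with (service IP taken from the error; assumes curl is present in the node image):
	
	  out/minikube-linux-amd64 -p no-preload-589869 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version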
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589869 -n no-preload-589869: exit status 2 (331.980067ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-589869
helpers_test.go:243: (dbg) docker inspect no-preload-589869:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	        "Created": "2025-10-18T09:42:25.517759152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 367117,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:43:46.112117555Z",
	            "FinishedAt": "2025-10-18T09:43:45.318110063Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hostname",
	        "HostsPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/hosts",
	        "LogPath": "/var/lib/docker/containers/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58/0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58-json.log",
	        "Name": "/no-preload-589869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-589869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-589869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0eccfe69a50731d33a91a10da234ec08d5ce83e88193a0dd53eb394336b5da58",
	                "LowerDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/948c69adddfab1e426f033b76756774629876cf0b876605e1a44fff7c658ed4c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-589869",
	                "Source": "/var/lib/docker/volumes/no-preload-589869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-589869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-589869",
	                "name.minikube.sigs.k8s.io": "no-preload-589869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e548c2ab67d6c14138d9a41050cb5e0560402efc66d6da03bc88304c9bc5a62e",
	            "SandboxKey": "/var/run/docker/netns/e548c2ab67d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33200"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33198"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33199"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-589869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:5d:eb:c9:a4:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b43a4d9c76b9aa8730370c98575c8c91fc6813136b487c412c5288120a5a3e49",
	                    "EndpointID": "ed49f00cd706d41a0099e5527d3b4a68e014de187029193fc1a1b1573f39744c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-589869",
	                        "0eccfe69a507"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
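A Go template over the same inspect output pulls individual fields without paging through the full JSON; for example, the host port mapped to the apiserver's 8443/tcp (values as captured above):

  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-589869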
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869: exit status 2 (370.321192ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
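helpers_test.go treats exit status 2 as potentially benign because minikube status encodes component state in its exit code rather than failing outright; the per-component breakdown behind the bare "Running" summary can be dumped as JSON:

  out/minikube-linux-amd64 status -p no-preload-589869 --output json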
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-589869 logs -n 25: (1.165013361s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:41 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ cert-options-310417 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ ssh     │ -p cert-options-310417 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ delete  │ -p cert-options-310417                                                                                                                                                                                                                        │ cert-options-310417       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ stop    │ -p kubernetes-upgrade-919613                                                                                                                                                                                                                  │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613 │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │                     │
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894    │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175        │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869         │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496    │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
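	
	The same command history can be regenerated outside a post-mortem, assuming this minikube build carries the logs --audit flag:
	
	  out/minikube-linux-amd64 logs --audit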
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:44:41
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:44:41.368864  376816 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:41.369231  376816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:41.369239  376816 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:41.369248  376816 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:41.369587  376816 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:41.370168  376816 out.go:368] Setting JSON to false
	I1018 09:44:41.371839  376816 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5225,"bootTime":1760775456,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:44:41.371955  376816 start.go:141] virtualization: kvm guest
	I1018 09:44:41.374034  376816 out.go:179] * [cert-expiration-650496] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:44:41.375316  376816 notify.go:220] Checking for updates...
	I1018 09:44:41.375395  376816 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:44:41.376818  376816 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:44:41.378086  376816 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:44:41.379209  376816 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:44:41.380348  376816 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:44:41.381486  376816 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:44:41.383224  376816 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:41.383921  376816 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:44:41.425604  376816 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:44:41.425712  376816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:41.502594  376816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:41.489372736 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:41.502749  376816 docker.go:318] overlay module found
	I1018 09:44:41.507275  376816 out.go:179] * Using the docker driver based on existing profile
	I1018 09:44:41.508662  376816 start.go:305] selected driver: docker
	I1018 09:44:41.508673  376816 start.go:925] validating driver "docker" against &{Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:41.508779  376816 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:41.510529  376816 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:41.600978  376816 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-18 09:44:41.587377819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:41.601513  376816 cni.go:84] Creating CNI manager for ""
	I1018 09:44:41.601607  376816 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:41.601671  376816 start.go:349] cluster config:
	{Name:cert-expiration-650496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-650496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:41.603438  376816 out.go:179] * Starting "cert-expiration-650496" primary control-plane node in "cert-expiration-650496" cluster
	I1018 09:44:41.604782  376816 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:41.605948  376816 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:41.607004  376816 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:41.607043  376816 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:41.607058  376816 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:41.607135  376816 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:41.607150  376816 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:41.607157  376816 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:41.607251  376816 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/cert-expiration-650496/config.json ...
	I1018 09:44:41.629873  376816 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:41.629888  376816 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:41.629907  376816 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:41.629935  376816 start.go:360] acquireMachinesLock for cert-expiration-650496: {Name:mkff120a47d272c6a75e24f68f43639d7a715083 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:41.629987  376816 start.go:364] duration metric: took 35.6µs to acquireMachinesLock for "cert-expiration-650496"
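	
	The acquireMachinesLock lines above show a per-machine lock keyed by name, with Delay:500ms and Timeout:10m0s governing the retry loop. A minimal Go sketch of such a polling lock follows; it is an illustration of those two parameters, not minikube's actual lock implementation, and the path is hypothetical:
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// acquireFileLock creates the lock file exclusively, retrying every
	// `delay` until `timeout` elapses, mirroring the Delay/Timeout fields
	// in the log line above.
	func acquireFileLock(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				f.Close()
				return nil // lock held; the caller removes path to release it
			}
			if !os.IsExist(err) {
				return err
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		// Hypothetical lock path, for illustration only.
		path := "/tmp/minikube-machines.lock"
		if err := acquireFileLock(path, 500*time.Millisecond, 10*time.Minute); err != nil {
			panic(err)
		}
		defer os.Remove(path)
		fmt.Println("lock held")
	}
	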
	I1018 09:44:41.630000  376816 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:44:41.630003  376816 fix.go:54] fixHost starting: 
	I1018 09:44:41.630201  376816 cli_runner.go:164] Run: docker container inspect cert-expiration-650496 --format={{.State.Status}}
	I1018 09:44:41.648262  376816 fix.go:112] recreateIfNeeded on cert-expiration-650496: state=Running err=<nil>
	W1018 09:44:41.648291  376816 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:44:38.885934  373771 out.go:252]   - Booting up control plane ...
	I1018 09:44:38.886061  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:44:38.886170  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:44:38.886744  373771 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:44:38.901424  373771 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:44:38.901563  373771 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:44:38.908059  373771 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:44:38.908287  373771 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:44:38.908350  373771 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:44:39.007287  373771 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:44:39.007416  373771 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:44:39.509027  373771 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.911514ms
	I1018 09:44:39.512885  373771 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:44:39.513026  373771 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:44:39.513163  373771 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:44:39.513305  373771 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:44:41.467696  373771 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.953869176s
	I1018 09:44:41.587501  373771 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.074310907s
	I1018 09:44:43.515747  373771 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002849696s
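	
	The kubelet-check and control-plane-check lines above poll plain-HTTP kubelet healthz on 127.0.0.1:10248 and the HTTPS component endpoints until each reports healthy. A minimal Go sketch of those probes; InsecureSkipVerify stands in for the cluster CA that the real checks validate against:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// probe issues a GET and reports the HTTP status, as the health checks
	// logged above do for each endpoint.
	func probe(client *http.Client, url string) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println(url, "unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println(url, "->", resp.Status)
	}
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		probe(client, "http://127.0.0.1:10248/healthz") // kubelet
		probe(client, "https://192.168.76.2:8443/livez") // kube-apiserver
		probe(client, "https://127.0.0.1:10259/livez")   // kube-scheduler
	}
	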
	I1018 09:44:43.527338  373771 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:44:43.540385  373771 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:44:43.551378  373771 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:44:43.551706  373771 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-055175 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:44:43.567911  373771 kubeadm.go:318] [bootstrap-token] Using token: tentv2.1ixpeens3rm6qbo3
	I1018 09:44:39.196072  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:54026->192.168.85.2:8443: read: connection reset by peer
	I1018 09:44:39.196153  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:39.196218  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:39.224811  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:39.224864  353123 cri.go:89] found id: "10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:39.224870  353123 cri.go:89] found id: ""
	I1018 09:44:39.224880  353123 logs.go:282] 2 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a]
	I1018 09:44:39.224937  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:39.229034  353123 ssh_runner.go:195] Run: which crictl
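	
	The cri.go and logs.go lines here and below repeat one pattern per component: `sudo crictl ps -a --quiet --name=<component>` to collect container IDs, then `crictl logs --tail 400 <id>` for each match. A minimal Go sketch of that loop, assuming crictl is on PATH and passwordless sudo (the log resolves the full path via `which crictl` instead):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// List all containers (running or exited) for one component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out)) // --quiet emits one container ID per line
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	
		// Tail the last 400 log lines of each, as logs.go does above.
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s ==\n%s\n", id, logs)
		}
	}
	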
	I1018 09:44:39.232729  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:39.232794  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:39.259774  353123 cri.go:89] found id: ""
	I1018 09:44:39.259795  353123 logs.go:282] 0 containers: []
	W1018 09:44:39.259807  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:39.259814  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:39.259900  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:39.287523  353123 cri.go:89] found id: ""
	I1018 09:44:39.287547  353123 logs.go:282] 0 containers: []
	W1018 09:44:39.287559  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:39.287566  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:39.287625  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:39.315549  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:39.315576  353123 cri.go:89] found id: ""
	I1018 09:44:39.315587  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:39.315647  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:39.319633  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:39.319703  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:39.347364  353123 cri.go:89] found id: ""
	I1018 09:44:39.347391  353123 logs.go:282] 0 containers: []
	W1018 09:44:39.347400  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:39.347407  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:39.347465  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:39.377527  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:39.377552  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:39.377558  353123 cri.go:89] found id: ""
	I1018 09:44:39.377567  353123 logs.go:282] 2 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:39.377635  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:39.383076  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:39.388189  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:39.388346  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:39.419411  353123 cri.go:89] found id: ""
	I1018 09:44:39.419442  353123 logs.go:282] 0 containers: []
	W1018 09:44:39.419453  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:39.419461  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:39.419556  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:39.447685  353123 cri.go:89] found id: ""
	I1018 09:44:39.447718  353123 logs.go:282] 0 containers: []
	W1018 09:44:39.447730  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:39.447748  353123 logs.go:123] Gathering logs for kube-apiserver [10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a] ...
	I1018 09:44:39.447768  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 10190b959391274621c0f2ef793d80e283c01c0f466eff70e72a39a883175a2a"
	I1018 09:44:39.479947  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:39.479976  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:39.506609  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:39.506643  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:39.554654  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:39.554693  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:39.584899  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:39.584928  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:39.670237  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:39.670273  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:39.716117  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:39.716152  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:39.743877  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:39.743908  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:39.762937  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:39.762965  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:39.823179  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:39.823217  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:39.823236  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:42.361892  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:42.362294  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:42.362343  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:42.362402  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:42.392720  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:42.392741  353123 cri.go:89] found id: ""
	I1018 09:44:42.392750  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:42.392807  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:42.397751  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:42.397817  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:42.430214  353123 cri.go:89] found id: ""
	I1018 09:44:42.430246  353123 logs.go:282] 0 containers: []
	W1018 09:44:42.430258  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:42.430266  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:42.430322  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:42.465365  353123 cri.go:89] found id: ""
	I1018 09:44:42.465391  353123 logs.go:282] 0 containers: []
	W1018 09:44:42.465403  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:42.465411  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:42.465475  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:42.499273  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:42.499302  353123 cri.go:89] found id: ""
	I1018 09:44:42.499312  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:42.499379  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:42.503876  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:42.503951  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:42.547284  353123 cri.go:89] found id: ""
	I1018 09:44:42.547315  353123 logs.go:282] 0 containers: []
	W1018 09:44:42.547327  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:42.547335  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:42.547399  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:42.583743  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:42.583768  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:42.583774  353123 cri.go:89] found id: ""
	I1018 09:44:42.583784  353123 logs.go:282] 2 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:42.583866  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:42.588618  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:42.593628  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:42.593699  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:42.623477  353123 cri.go:89] found id: ""
	I1018 09:44:42.623504  353123 logs.go:282] 0 containers: []
	W1018 09:44:42.623514  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:42.623522  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:42.623579  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:42.656269  353123 cri.go:89] found id: ""
	I1018 09:44:42.656297  353123 logs.go:282] 0 containers: []
	W1018 09:44:42.656308  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:42.656328  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:42.656343  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:42.771661  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:42.771702  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:42.844403  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:42.844429  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:42.844444  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:42.880211  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:42.880243  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:42.943544  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:42.943599  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:42.968601  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:42.968635  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:43.009552  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:43.009595  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:43.074544  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:43.074578  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:43.111563  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:43.111604  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:41.649964  376816 out.go:252] * Updating the running docker "cert-expiration-650496" container ...
	I1018 09:44:41.649999  376816 machine.go:93] provisionDockerMachine start ...
	I1018 09:44:41.650071  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:41.669086  376816 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:41.669385  376816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:44:41.669391  376816 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:44:41.805719  376816 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:44:41.805739  376816 ubuntu.go:182] provisioning hostname "cert-expiration-650496"
	I1018 09:44:41.805799  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:41.824258  376816 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:41.824460  376816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:44:41.824467  376816 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-650496 && echo "cert-expiration-650496" | sudo tee /etc/hostname
	I1018 09:44:41.969079  376816 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-650496
	
	I1018 09:44:41.969161  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:41.987093  376816 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:41.987287  376816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:44:41.987298  376816 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-650496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-650496/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-650496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:44:42.126513  376816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:44:42.126532  376816 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:44:42.126546  376816 ubuntu.go:190] setting up certificates
	I1018 09:44:42.126561  376816 provision.go:84] configureAuth start
	I1018 09:44:42.126610  376816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:44:42.145133  376816 provision.go:143] copyHostCerts
	I1018 09:44:42.145192  376816 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:44:42.145208  376816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:44:42.145300  376816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:44:42.145414  376816 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:44:42.145418  376816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:44:42.145443  376816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:44:42.145505  376816 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:44:42.145508  376816 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:44:42.145528  376816 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:44:42.145586  376816 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-650496 san=[127.0.0.1 192.168.103.2 cert-expiration-650496 localhost minikube]
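	
	The provision.go line above generates a server certificate whose SANs cover 127.0.0.1, 192.168.103.2, cert-expiration-650496, localhost and minikube. A minimal Go sketch of producing a certificate carrying those SANs; it is self-signed for brevity (the real one is signed by the minikube CA key) and uses RSA-2048 as an assumption:
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.cert-expiration-650496"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * time.Minute), // the profile's CertExpiration field
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN list from the log line above.
			DNSNames:    []string{"cert-expiration-650496", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
	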
	I1018 09:44:42.198984  376816 provision.go:177] copyRemoteCerts
	I1018 09:44:42.199046  376816 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:44:42.199098  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:42.218506  376816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:44:42.317289  376816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:44:42.334887  376816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1018 09:44:42.352473  376816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:44:42.370675  376816 provision.go:87] duration metric: took 244.102201ms to configureAuth
	I1018 09:44:42.370697  376816 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:44:42.370924  376816 config.go:182] Loaded profile config "cert-expiration-650496": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:42.371049  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:42.391935  376816 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:42.392156  376816 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1018 09:44:42.392172  376816 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:44:42.744015  376816 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:44:42.744031  376816 machine.go:96] duration metric: took 1.09402494s to provisionDockerMachine
	I1018 09:44:42.744043  376816 start.go:293] postStartSetup for "cert-expiration-650496" (driver="docker")
	I1018 09:44:42.744055  376816 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:44:42.744135  376816 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:44:42.744177  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:42.767506  376816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:44:42.875654  376816 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:44:42.880274  376816 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:44:42.880294  376816 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:44:42.880306  376816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:44:42.880371  376816 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:44:42.880478  376816 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:44:42.880605  376816 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:44:42.890245  376816 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:44:42.915414  376816 start.go:296] duration metric: took 171.353736ms for postStartSetup
	I1018 09:44:42.915497  376816 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:44:42.915533  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:42.938733  376816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:44:43.047317  376816 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:44:43.053534  376816 fix.go:56] duration metric: took 1.423523112s for fixHost
	I1018 09:44:43.053550  376816 start.go:83] releasing machines lock for "cert-expiration-650496", held for 1.423556238s
	I1018 09:44:43.053684  376816 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-650496
	I1018 09:44:43.075945  376816 ssh_runner.go:195] Run: cat /version.json
	I1018 09:44:43.075988  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:43.076040  376816 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:44:43.076105  376816 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-650496
	I1018 09:44:43.097483  376816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:44:43.101915  376816 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/cert-expiration-650496/id_rsa Username:docker}
	I1018 09:44:43.252998  376816 ssh_runner.go:195] Run: systemctl --version
	I1018 09:44:43.259609  376816 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:44:43.302389  376816 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:44:43.308368  376816 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:44:43.308430  376816 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:44:43.317813  376816 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:44:43.317890  376816 start.go:495] detecting cgroup driver to use...
	I1018 09:44:43.317926  376816 detect.go:190] detected "systemd" cgroup driver on host os
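	
	detect.go reports the "systemd" cgroup driver for this host. One common heuristic for that decision (not necessarily minikube's exact check) is that a unified cgroup v2 mount implies the systemd driver, and /sys/fs/cgroup/cgroup.controllers only exists on v2:
	
	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// cgroup.controllers is present only when the unified (v2) hierarchy is mounted.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 detected -> use the systemd cgroup driver")
		} else {
			fmt.Println("cgroup v1 -> driver choice depends on the init system")
		}
	}
	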
	I1018 09:44:43.317975  376816 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:44:43.333377  376816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:44:43.346154  376816 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:44:43.346194  376816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:44:43.363368  376816 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:44:43.377354  376816 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:44:43.514335  376816 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:44:43.657798  376816 docker.go:234] disabling docker service ...
	I1018 09:44:43.657870  376816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:44:43.678159  376816 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:44:43.692359  376816 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:44:43.826143  376816 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:44:43.973206  376816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:44:43.986985  376816 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:44:44.003723  376816 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:44:44.003792  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.013397  376816 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:44:44.013452  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.024409  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.034323  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.043915  376816 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:44:44.053974  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.064864  376816 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:44:44.074337  376816 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
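	
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys. This is a reconstruction from the commands, with section placement per CRI-O's documented schema, not a captured file:
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	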
	I1018 09:44:44.085116  376816 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:44:44.094943  376816 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:44:44.104149  376816 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:44:44.279341  376816 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:44:44.452176  376816 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:44:44.452258  376816 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:44:44.457556  376816 start.go:563] Will wait 60s for crictl version
	I1018 09:44:44.457615  376816 ssh_runner.go:195] Run: which crictl
	I1018 09:44:44.462068  376816 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:44:44.488704  376816 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:44:44.488763  376816 ssh_runner.go:195] Run: crio --version
	I1018 09:44:44.520723  376816 ssh_runner.go:195] Run: crio --version
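	
	start.go above waits up to 60s each for the CRI socket to appear and for crictl to answer. A minimal Go sketch of those two waits; the 500ms poll interval is an illustrative assumption:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)
	
	// waitFor retries check until it succeeds or timeout elapses.
	func waitFor(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			if err := check(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return err
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		must := func(err error) {
			if err != nil {
				panic(err)
			}
		}
		// Wait for the socket path, as `stat /var/run/crio/crio.sock` does above.
		must(waitFor(60*time.Second, func() error {
			_, err := os.Stat("/var/run/crio/crio.sock")
			return err
		}))
		// Then wait for crictl to report a version.
		must(waitFor(60*time.Second, func() error {
			return exec.Command("sudo", "crictl", "version").Run()
		}))
		fmt.Println("crio is up")
	}
	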
	I1018 09:44:44.552628  376816 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:44:43.569230  373771 out.go:252]   - Configuring RBAC rules ...
	I1018 09:44:43.569357  373771 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:44:43.573243  373771 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:44:43.578885  373771 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:44:43.581745  373771 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:44:43.585530  373771 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:44:43.588105  373771 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:44:43.922658  373771 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:44:44.341726  373771 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:44:44.923697  373771 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:44:44.924922  373771 kubeadm.go:318] 
	I1018 09:44:44.925032  373771 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:44:44.925050  373771 kubeadm.go:318] 
	I1018 09:44:44.925169  373771 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:44:44.925185  373771 kubeadm.go:318] 
	I1018 09:44:44.925213  373771 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:44:44.925295  373771 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:44:44.925372  373771 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:44:44.925383  373771 kubeadm.go:318] 
	I1018 09:44:44.925468  373771 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:44:44.925479  373771 kubeadm.go:318] 
	I1018 09:44:44.925553  373771 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:44:44.925564  373771 kubeadm.go:318] 
	I1018 09:44:44.925645  373771 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:44:44.925732  373771 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:44:44.925858  373771 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:44:44.925870  373771 kubeadm.go:318] 
	I1018 09:44:44.925981  373771 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:44:44.926089  373771 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:44:44.926095  373771 kubeadm.go:318] 
	I1018 09:44:44.926213  373771 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token tentv2.1ixpeens3rm6qbo3 \
	I1018 09:44:44.926393  373771 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:44:44.926425  373771 kubeadm.go:318] 	--control-plane 
	I1018 09:44:44.926434  373771 kubeadm.go:318] 
	I1018 09:44:44.926557  373771 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:44:44.926568  373771 kubeadm.go:318] 
	I1018 09:44:44.926675  373771 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token tentv2.1ixpeens3rm6qbo3 \
	I1018 09:44:44.926835  373771 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:44:44.930027  373771 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:44:44.930192  373771 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
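	
	The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA certificate's Subject Public Key Info, which joining nodes recompute to pin the CA. A minimal Go sketch of deriving it from the standard kubeadm CA path:
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}
	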
	I1018 09:44:44.930222  373771 cni.go:84] Creating CNI manager for ""
	I1018 09:44:44.930233  373771 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:44.932465  373771 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.106192883Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.109551471Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:44:06 no-preload-589869 crio[566]: time="2025-10-18T09:44:06.109573284Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.347465993Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cb67ab31-b64f-4827-8025-3d6870bba1d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.348511559Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8f0b0a29-c178-4448-8275-f8fa71fbe7b0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.349588169Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=fa9a6802-1d87-4eef-8de9-631bcd68140e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.34989974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.355485904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.356060496Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.395853131Z" level=info msg="Created container 396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=fa9a6802-1d87-4eef-8de9-631bcd68140e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.396474991Z" level=info msg="Starting container: 396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1" id=c9036589-d4b8-4320-8a1f-1ccff4406e8a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.398385638Z" level=info msg="Started container" PID=1741 containerID=396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper id=c9036589-d4b8-4320-8a1f-1ccff4406e8a name=/runtime.v1.RuntimeService/StartContainer sandboxID=c8dd809b3821458bbc103ca3e998df5896396f25425b99344ab31a2c8b4fcbf1
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.458904633Z" level=info msg="Removing container: 0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20" id=ee0efc63-8d14-4255-8404-d48857677229 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:25 no-preload-589869 crio[566]: time="2025-10-18T09:44:25.470471869Z" level=info msg="Removed container 0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm/dashboard-metrics-scraper" id=ee0efc63-8d14-4255-8404-d48857677229 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.463362679Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=580b6898-19e6-4b67-81ac-205a79b7cfaa name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.492075394Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a11d06e4-ab21-4929-b810-4c183901023f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.552216492Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2cab525b-17b4-4635-b03a-25cfb1f0b505 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.552493283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.63690268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637116859Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/0bf37c4841aa483945b7a02ef5f9b25bd89d94184a48ae5a170c17f1b33c9be9/merged/etc/passwd: no such file or directory"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637145117Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0bf37c4841aa483945b7a02ef5f9b25bd89d94184a48ae5a170c17f1b33c9be9/merged/etc/group: no such file or directory"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.637431983Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.714162996Z" level=info msg="Created container 058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc: kube-system/storage-provisioner/storage-provisioner" id=2cab525b-17b4-4635-b03a-25cfb1f0b505 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.714930849Z" level=info msg="Starting container: 058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc" id=88173215-e198-4620-b634-f4f9dc33e1d0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:44:26 no-preload-589869 crio[566]: time="2025-10-18T09:44:26.717249386Z" level=info msg="Started container" PID=1755 containerID=058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc description=kube-system/storage-provisioner/storage-provisioner id=88173215-e198-4620-b634-f4f9dc33e1d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=815a103561fddbda0a2ceb9c79a986bfdfecc6cc53a97284c1ef0c14d44e8dc7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	058fe5ecd4e4b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   815a103561fdd       storage-provisioner                          kube-system
	396745a65f0a7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   2                   c8dd809b38214       dashboard-metrics-scraper-6ffb444bf9-wtprm   kubernetes-dashboard
	147f4581c55b5       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   3a646b16cdff5       kubernetes-dashboard-855c9754f9-cckhv        kubernetes-dashboard
	1a10a488ac761       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   a72b6301d8a91       coredns-66bc5c9577-pck54                     kube-system
	376d9ae981623       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   a6e5ce289feda       busybox                                      default
	5c7847fab0c84       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   815a103561fdd       storage-provisioner                          kube-system
	6776f5211a0e8       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   c2a136ae99ecb       kindnet-zjqmf                                kube-system
	f16f92d94527f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   a6d523f23a5a1       kube-proxy-45kpn                             kube-system
	8ea25fde146e8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   17ab159cf8b9a       kube-controller-manager-no-preload-589869    kube-system
	e90a7d734d675       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   5d235957bf4a0       etcd-no-preload-589869                       kube-system
	3021ebf25ee25       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   0f4d7162119df       kube-scheduler-no-preload-589869             kube-system
	365f44dae4ed2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   059984b318bbc       kube-apiserver-no-preload-589869             kube-system
	
	
	==> coredns [1a10a488ac76179f6a9ca2e828262111d75fcf676bda59f5aaf0c6f715a6e6c1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54635 - 23773 "HINFO IN 7698436634749166641.1414637754520399092. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022413161s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-589869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-589869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=no-preload-589869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_42_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:42:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-589869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:44:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:42:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:44:25 +0000   Sat, 18 Oct 2025 09:43:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-589869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                6a71982a-ecb5-4a3a-b089-e736cb5f928f
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-pck54                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-589869                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-zjqmf                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-589869              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-589869     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-45kpn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-589869              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-wtprm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cckhv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  111s               kubelet          Node no-preload-589869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    111s               kubelet          Node no-preload-589869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     111s               kubelet          Node no-preload-589869 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node no-preload-589869 event: Registered Node no-preload-589869 in Controller
	  Normal  NodeReady                92s                kubelet          Node no-preload-589869 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node no-preload-589869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node no-preload-589869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node no-preload-589869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node no-preload-589869 event: Registered Node no-preload-589869 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [e90a7d734d6758358dd228647088e66ec6aa6cfda7ad58d83ee3410a00ea8756] <==
	{"level":"warn","ts":"2025-10-18T09:43:54.475511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.494997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.498263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.505329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.512375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:43:54.570183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:44:26.635059Z","caller":"traceutil/trace.go:172","msg":"trace[1436335138] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"164.905871ms","start":"2025-10-18T09:44:26.470133Z","end":"2025-10-18T09:44:26.635039Z","steps":["trace[1436335138] 'process raft request'  (duration: 164.784597ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:26.847799Z","caller":"traceutil/trace.go:172","msg":"trace[357102648] linearizableReadLoop","detail":"{readStateIndex:652; appliedIndex:652; }","duration":"106.711546ms","start":"2025-10-18T09:44:26.741062Z","end":"2025-10-18T09:44:26.847773Z","steps":["trace[357102648] 'read index received'  (duration: 106.704025ms)","trace[357102648] 'applied index is now lower than readState.Index'  (duration: 6.524µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:26.862698Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.614971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1117"}
	{"level":"info","ts":"2025-10-18T09:44:26.862808Z","caller":"traceutil/trace.go:172","msg":"trace[1445016313] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:618; }","duration":"121.724952ms","start":"2025-10-18T09:44:26.741051Z","end":"2025-10-18T09:44:26.862776Z","steps":["trace[1445016313] 'agreement among raft nodes before linearized reading'  (duration: 106.791268ms)","trace[1445016313] 'range keys from in-memory index tree'  (duration: 14.735602ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:26.862871Z","caller":"traceutil/trace.go:172","msg":"trace[2128761252] transaction","detail":"{read_only:false; response_revision:619; number_of_response:1; }","duration":"144.271421ms","start":"2025-10-18T09:44:26.718515Z","end":"2025-10-18T09:44:26.862787Z","steps":["trace[2128761252] 'process raft request'  (duration: 129.287013ms)","trace[2128761252] 'compare'  (duration: 14.875115ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:27.583396Z","caller":"traceutil/trace.go:172","msg":"trace[168689661] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"200.671861ms","start":"2025-10-18T09:44:27.382709Z","end":"2025-10-18T09:44:27.583380Z","steps":["trace[168689661] 'process raft request'  (duration: 200.550824ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.713330Z","caller":"traceutil/trace.go:172","msg":"trace[149135860] linearizableReadLoop","detail":"{readStateIndex:655; appliedIndex:655; }","duration":"109.953631ms","start":"2025-10-18T09:44:27.603349Z","end":"2025-10-18T09:44:27.713302Z","steps":["trace[149135860] 'read index received'  (duration: 109.945154ms)","trace[149135860] 'applied index is now lower than readState.Index'  (duration: 7.335µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:27.722209Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.835399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-pck54\" limit:1 ","response":"range_response_count:1 size:5755"}
	{"level":"info","ts":"2025-10-18T09:44:27.722276Z","caller":"traceutil/trace.go:172","msg":"trace[1144694429] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-pck54; range_end:; response_count:1; response_revision:621; }","duration":"118.915874ms","start":"2025-10-18T09:44:27.603339Z","end":"2025-10-18T09:44:27.722255Z","steps":["trace[1144694429] 'agreement among raft nodes before linearized reading'  (duration: 110.048677ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.722289Z","caller":"traceutil/trace.go:172","msg":"trace[1317608009] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"134.544313ms","start":"2025-10-18T09:44:27.587731Z","end":"2025-10-18T09:44:27.722275Z","steps":["trace[1317608009] 'process raft request'  (duration: 125.688933ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.722281Z","caller":"traceutil/trace.go:172","msg":"trace[1191704746] transaction","detail":"{read_only:false; response_revision:623; number_of_response:1; }","duration":"134.52482ms","start":"2025-10-18T09:44:27.587742Z","end":"2025-10-18T09:44:27.722267Z","steps":["trace[1191704746] 'process raft request'  (duration: 134.486792ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.772541Z","caller":"traceutil/trace.go:172","msg":"trace[1604303708] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"183.164846ms","start":"2025-10-18T09:44:27.589356Z","end":"2025-10-18T09:44:27.772521Z","steps":["trace[1604303708] 'process raft request'  (duration: 183.038779ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.772705Z","caller":"traceutil/trace.go:172","msg":"trace[2138093964] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"180.93727ms","start":"2025-10-18T09:44:27.591751Z","end":"2025-10-18T09:44:27.772688Z","steps":["trace[2138093964] 'process raft request'  (duration: 180.740091ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:27.900478Z","caller":"traceutil/trace.go:172","msg":"trace[169008884] linearizableReadLoop","detail":"{readStateIndex:659; appliedIndex:659; }","duration":"119.092627ms","start":"2025-10-18T09:44:27.781363Z","end":"2025-10-18T09:44:27.900456Z","steps":["trace[169008884] 'read index received'  (duration: 119.086698ms)","trace[169008884] 'applied index is now lower than readState.Index'  (duration: 5.17µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:27.932712Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"151.328152ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-589869\" limit:1 ","response":"range_response_count:1 size:5235"}
	{"level":"info","ts":"2025-10-18T09:44:27.932757Z","caller":"traceutil/trace.go:172","msg":"trace[225141672] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"152.328374ms","start":"2025-10-18T09:44:27.780410Z","end":"2025-10-18T09:44:27.932738Z","steps":["trace[225141672] 'process raft request'  (duration: 120.110277ms)","trace[225141672] 'compare'  (duration: 32.101785ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:44:27.932815Z","caller":"traceutil/trace.go:172","msg":"trace[1201379552] range","detail":"{range_begin:/registry/minions/no-preload-589869; range_end:; response_count:1; response_revision:625; }","duration":"151.394565ms","start":"2025-10-18T09:44:27.781359Z","end":"2025-10-18T09:44:27.932754Z","steps":["trace[1201379552] 'agreement among raft nodes before linearized reading'  (duration: 119.168614ms)","trace[1201379552] 'range keys from in-memory index tree'  (duration: 32.076266ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:28.304854Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.427028ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-45kpn\" limit:1 ","response":"range_response_count:1 size:5043"}
	{"level":"info","ts":"2025-10-18T09:44:28.304921Z","caller":"traceutil/trace.go:172","msg":"trace[689393160] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-45kpn; range_end:; response_count:1; response_revision:626; }","duration":"100.539027ms","start":"2025-10-18T09:44:28.204368Z","end":"2025-10-18T09:44:28.304907Z","steps":["trace[689393160] 'range keys from in-memory index tree'  (duration: 100.279057ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:46 up  1:27,  0 user,  load average: 2.26, 2.77, 1.80
	Linux no-preload-589869 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6776f5211a0e843c931b1ce36383a5f28d8bde46797fd60263b1ece94b78cabc] <==
	I1018 09:43:55.887079       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:43:55.887318       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:43:55.887472       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:43:55.887487       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:43:55.887505       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:43:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:43:56.090482       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:43:56.090518       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:43:56.090533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:43:56.090699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:43:56.521157       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:43:56.521180       1 metrics.go:72] Registering metrics
	I1018 09:43:56.521244       1 controller.go:711] "Syncing nftables rules"
	I1018 09:44:06.090699       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:06.090751       1 main.go:301] handling current node
	I1018 09:44:16.090947       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:16.090977       1 main.go:301] handling current node
	I1018 09:44:26.091141       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:26.091177       1 main.go:301] handling current node
	I1018 09:44:36.094882       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:36.094909       1 main.go:301] handling current node
	I1018 09:44:46.099941       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:44:46.099971       1 main.go:301] handling current node
	
	
	==> kube-apiserver [365f44dae4ed2d0a509b24fe9019127a3b886f7165b236fedf613caabda9e161] <==
	I1018 09:43:55.045278       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:43:55.045075       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1018 09:43:55.045406       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:43:55.045574       1 policy_source.go:240] refreshing policies
	I1018 09:43:55.045100       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:43:55.045614       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:43:55.045620       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:43:55.045626       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:43:55.045140       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:43:55.045690       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:43:55.045166       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:43:55.051128       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:43:55.056798       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1018 09:43:55.064937       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:43:55.343898       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:43:55.365278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:43:55.388177       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:43:55.415339       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:43:55.425061       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:43:55.475548       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.151.147"}
	I1018 09:43:55.483925       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.164.57"}
	I1018 09:43:55.947528       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:43:58.649745       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:43:58.951357       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:43:59.001417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8ea25fde146e8e96a504e7f7eaa6b8b321ddf9a82560cd19db582cffc49f48b2] <==
	I1018 09:43:58.370512       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:43:58.394091       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:43:58.394574       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:43:58.395570       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:43:58.395713       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:43:58.395764       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:43:58.395782       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:43:58.396014       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:43:58.396033       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:43:58.396017       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:43:58.396017       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:43:58.396112       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:43:58.397505       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:43:58.400795       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:43:58.400997       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:43:58.401028       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:43:58.401036       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:43:58.401044       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:43:58.401184       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:43:58.401218       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:43:58.412149       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:43:58.418348       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:43:58.418367       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:43:58.418375       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:43:58.426444       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f16f92d94527f39749d0ce08e163418380fcaf097f1715e466a624f2a016601a] <==
	I1018 09:43:55.775708       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:43:55.831404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:43:55.931854       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:43:55.931890       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:43:55.931963       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:43:55.951619       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:43:55.951672       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:43:55.957617       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:43:55.958287       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:43:55.958354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:55.960954       1 config.go:200] "Starting service config controller"
	I1018 09:43:55.960974       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:43:55.961008       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:43:55.961015       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:43:55.961035       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:43:55.961040       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:43:55.961280       1 config.go:309] "Starting node config controller"
	I1018 09:43:55.961297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:43:55.961305       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:43:56.061808       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:43:56.061845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:43:56.061876       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3021ebf25ee25e7930a9131ff2cdf54e1638a03d0f603c0186e607f5bf6ea827] <==
	I1018 09:43:54.070344       1 serving.go:386] Generated self-signed cert in-memory
	I1018 09:43:55.012568       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:43:55.012592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:43:55.017284       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1018 09:43:55.017316       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1018 09:43:55.017373       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.017398       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.017410       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:43:55.017467       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1018 09:43:55.017768       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:43:55.017851       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:43:55.117524       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1018 09:43:55.117513       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:43:55.117703       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:43:55 no-preload-589869 kubelet[712]: I1018 09:43:55.464062     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9c851a2c-8320-45ae-9c2f-3f60bc0401c8-tmp\") pod \"storage-provisioner\" (UID: \"9c851a2c-8320-45ae-9c2f-3f60bc0401c8\") " pod="kube-system/storage-provisioner"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984366     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdv4\" (UniqueName: \"kubernetes.io/projected/3a9478e4-6026-4abd-9276-ffd01cf7b5ff-kube-api-access-wgdv4\") pod \"dashboard-metrics-scraper-6ffb444bf9-wtprm\" (UID: \"3a9478e4-6026-4abd-9276-ffd01cf7b5ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984442     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8f48c99e-2020-467e-951d-38d637d68c79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-cckhv\" (UID: \"8f48c99e-2020-467e-951d-38d637d68c79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984476     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3a9478e4-6026-4abd-9276-ffd01cf7b5ff-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-wtprm\" (UID: \"3a9478e4-6026-4abd-9276-ffd01cf7b5ff\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm"
	Oct 18 09:43:58 no-preload-589869 kubelet[712]: I1018 09:43:58.984501     712 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnxj6\" (UniqueName: \"kubernetes.io/projected/8f48c99e-2020-467e-951d-38d637d68c79-kube-api-access-dnxj6\") pod \"kubernetes-dashboard-855c9754f9-cckhv\" (UID: \"8f48c99e-2020-467e-951d-38d637d68c79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv"
	Oct 18 09:44:02 no-preload-589869 kubelet[712]: I1018 09:44:02.393321     712 scope.go:117] "RemoveContainer" containerID="b5585fc5c98f760e5ff9575e79132ec8aeb47ce8371a44fff4fe1b14192d2fb2"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: I1018 09:44:03.397621     712 scope.go:117] "RemoveContainer" containerID="b5585fc5c98f760e5ff9575e79132ec8aeb47ce8371a44fff4fe1b14192d2fb2"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: I1018 09:44:03.397775     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:03 no-preload-589869 kubelet[712]: E1018 09:44:03.397985     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:04 no-preload-589869 kubelet[712]: I1018 09:44:04.402478     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:04 no-preload-589869 kubelet[712]: E1018 09:44:04.402693     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:06 no-preload-589869 kubelet[712]: I1018 09:44:06.419895     712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cckhv" podStartSLOduration=2.167720227 podStartE2EDuration="8.4198764s" podCreationTimestamp="2025-10-18 09:43:58 +0000 UTC" firstStartedPulling="2025-10-18 09:43:59.250377998 +0000 UTC m=+7.003683629" lastFinishedPulling="2025-10-18 09:44:05.502534172 +0000 UTC m=+13.255839802" observedRunningTime="2025-10-18 09:44:06.419688554 +0000 UTC m=+14.172994209" watchObservedRunningTime="2025-10-18 09:44:06.4198764 +0000 UTC m=+14.173182038"
	Oct 18 09:44:10 no-preload-589869 kubelet[712]: I1018 09:44:10.583508     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:10 no-preload-589869 kubelet[712]: E1018 09:44:10.583697     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.346923     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.456978     712 scope.go:117] "RemoveContainer" containerID="0d2b5b73f184eb954995efc8b2c9520141cc4f7d1e35f0438a63e88ce5832e20"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: I1018 09:44:25.457433     712 scope.go:117] "RemoveContainer" containerID="396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	Oct 18 09:44:25 no-preload-589869 kubelet[712]: E1018 09:44:25.457661     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:26 no-preload-589869 kubelet[712]: I1018 09:44:26.463021     712 scope.go:117] "RemoveContainer" containerID="5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921"
	Oct 18 09:44:30 no-preload-589869 kubelet[712]: I1018 09:44:30.584301     712 scope.go:117] "RemoveContainer" containerID="396745a65f0a79c24e427c79e33348ceee5c447ada1cb126684dc3244b1126c1"
	Oct 18 09:44:30 no-preload-589869 kubelet[712]: E1018 09:44:30.584482     712 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-wtprm_kubernetes-dashboard(3a9478e4-6026-4abd-9276-ffd01cf7b5ff)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-wtprm" podUID="3a9478e4-6026-4abd-9276-ffd01cf7b5ff"
	Oct 18 09:44:41 no-preload-589869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:44:41 no-preload-589869 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:44:41 no-preload-589869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:44:41 no-preload-589869 systemd[1]: kubelet.service: Consumed 1.553s CPU time.
	
	
	==> kubernetes-dashboard [147f4581c55b56755c7f6628078a265f0b5089ea5e8a4bc9c6409a719020f372] <==
	2025/10/18 09:44:05 Using namespace: kubernetes-dashboard
	2025/10/18 09:44:05 Using in-cluster config to connect to apiserver
	2025/10/18 09:44:05 Using secret token for csrf signing
	2025/10/18 09:44:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:44:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:44:05 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:44:05 Generating JWE encryption key
	2025/10/18 09:44:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:44:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:44:05 Initializing JWE encryption key from synchronized object
	2025/10/18 09:44:05 Creating in-cluster Sidecar client
	2025/10/18 09:44:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:44:05 Serving insecurely on HTTP port: 9090
	2025/10/18 09:44:05 Starting overwatch
	2025/10/18 09:44:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [058fe5ecd4e4b4f0d36852f542051c7ed0d450b328689d99161e077efe1a7adc] <==
	I1018 09:44:26.731637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:44:26.739717       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:44:26.739765       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:44:26.863993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:30.318762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:34.578767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:38.177863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:41.231476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.254666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.259582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:44:44.259767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:44:44.259940       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e!
	I1018 09:44:44.259998       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ffa4ca64-af5f-429e-8808-12f7378aafdf", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e became leader
	W1018 09:44:44.261684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:44.266729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:44:44.360171       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-589869_265f0382-a280-4db1-8d6b-41ec87cf068e!
	W1018 09:44:46.269860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:44:46.274127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5c7847fab0c84055a61d76d77e3f30eda6e50b2bf7320ca686c85044d8e30921] <==
	I1018 09:43:55.742572       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:44:25.747303       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
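Read together, the post-mortem capture above is internally consistent: coredns and the first storage-provisioner container both time out dialing the in-cluster apiserver service at 10.96.0.1:443, the first storage-provisioner exits fatally on that timeout, and the restarted one (058fe5ecd4e4b in the container status table) acquires its leader lease once connectivity returns. To regenerate a capture like this outside the CI harness, the same command the advice boxes in this report recommend applies, with the profile name taken from this run (a sketch, not part of the test output):

	out/minikube-linux-amd64 -p no-preload-589869 logs --file=logs.txt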
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589869 -n no-preload-589869
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-589869 -n no-preload-589869: exit status 2 (336.220493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-589869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (276.170292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
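Each MK_ADDON_ENABLE_PAUSED failure in this run reduces to the same probe: before enabling an addon, minikube checks whether any containers are paused by running `sudo runc list -f json` on the node (quoted verbatim in the stderr above), and that check fails because /run/runc does not exist. A minimal way to reproduce the probe by hand, assuming the node container is still running (names taken from this test):

	docker exec embed-certs-055175 ls /run/runc                # confirms the missing state directory
	docker exec embed-certs-055175 sudo runc list -f json      # reproduces: open /run/runc: no such file or directory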
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-055175 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-055175 describe deploy/metrics-server -n kube-system: exit status 1 (77.482344ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-055175 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
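The assertion in start_stop_delete_test.go compares the metrics-server deployment's image string against the --images/--registries overrides passed above; with the deployment absent (NotFound), the deployment info string is empty and the check fails, presumably because the enable step aborted at the paused-state check before applying any manifests. A manual version of the same check, as a sketch using standard kubectl jsonpath, would be:

	kubectl --context embed-certs-055175 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: fake.domain/registry.k8s.io/echoserver:1.4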
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-055175
helpers_test.go:243: (dbg) docker inspect embed-certs-055175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	        "Created": "2025-10-18T09:44:28.71602918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 374574,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:44:28.753421885Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a-json.log",
	        "Name": "/embed-certs-055175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-055175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-055175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	                "LowerDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-055175",
	                "Source": "/var/lib/docker/volumes/embed-certs-055175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-055175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-055175",
	                "name.minikube.sigs.k8s.io": "embed-certs-055175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "88213dfe8fff75570935f15af811867a62550c0132d874cba2f72c3f6e39d64f",
	            "SandboxKey": "/var/run/docker/netns/88213dfe8fff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33201"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-055175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:67:a6:b4:1c:81",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d2dbeb8dc9f32aa321be9871888fc0b62950b6ca92410878ff116152ea346c2",
	                    "EndpointID": "d84e874261c79df4b360531dd4ec3cc569cda357b60294d2a51a9e0aeda94506",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-055175",
	                        "7ab18617f15c"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
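Editor's note: the inspect dump above is the raw post-mortem state capture. A minimal Go sketch (not part of helpers_test.go; the container name and the struct fields are assumed from the output above) of pulling out just the container status and host port bindings that the later status and log steps depend on:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry mirrors only the fields of the docker inspect output used here.
type inspectEntry struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	// "docker inspect" prints a JSON array, one entry per inspected object,
	// which is why the sketch decodes into a slice.
	out, err := exec.Command("docker", "inspect", "embed-certs-055175").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("status=%s running=%v\n", e.State.Status, e.State.Running)
		for port, bindings := range e.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("  %s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}
}

Run against the container above, this would print status=running and the five 127.0.0.1 port bindings (22, 2376, 5000, 8443, 32443) shown in NetworkSettings.Ports.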
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25: (1.161416467s)
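Editor's note: in the minikube logs that follow, the start loop repeatedly probes the apiserver's /healthz endpoint and logs "stopped: ... connection refused" until the server comes up. A minimal Go sketch of such a probe (assumptions: the 192.168.85.2:8443 address is taken from the logs below, and certificate verification is skipped purely for illustration, since a standalone probe has no CA for the bootstrap apiserver's cert):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skipped only for this illustration; see assumption above.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://192.168.85.2:8443/healthz" // address as seen in the logs below
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// Matches the "stopped: ...: connect: connection refused" pattern below.
			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: %s\n", attempt, resp.Status)
		return
	}
}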
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p missing-upgrade-631894                                                                                                                                                                                                                     │ missing-upgrade-631894       │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:42 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:42 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:44:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:44:50.689962  381291 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:50.690289  381291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:50.690297  381291 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:50.690303  381291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:50.690624  381291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:50.691332  381291 out.go:368] Setting JSON to false
	I1018 09:44:50.692657  381291 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5235,"bootTime":1760775456,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:44:50.692760  381291 start.go:141] virtualization: kvm guest
	I1018 09:44:50.694521  381291 out.go:179] * [newest-cni-708733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:44:50.695818  381291 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:44:50.695836  381291 notify.go:220] Checking for updates...
	I1018 09:44:50.697124  381291 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:44:50.698668  381291 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:44:50.700646  381291 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:44:50.701958  381291 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:44:50.703380  381291 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:44:50.655938  381160 start.go:305] selected driver: docker
	I1018 09:44:50.655957  381160 start.go:925] validating driver "docker" against <nil>
	I1018 09:44:50.655968  381160 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:50.656543  381160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.723181  381160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:44:50.711410722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.723423  381160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:44:50.723752  381160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:44:50.727098  381160 out.go:179] * Using Docker driver with root privileges
	I1018 09:44:50.728279  381160 cni.go:84] Creating CNI manager for ""
	I1018 09:44:50.728370  381160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:50.728388  381160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:44:50.728469  381160 start.go:349] cluster config:
	{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:50.730134  381160 out.go:179] * Starting "default-k8s-diff-port-942905" primary control-plane node in "default-k8s-diff-port-942905" cluster
	I1018 09:44:50.731319  381160 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:50.732546  381160 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:50.733542  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:50.733576  381160 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:50.733584  381160 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:50.733638  381160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:50.733673  381160 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:50.733685  381160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:50.733790  381160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:44:50.733811  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json: {Name:mk9ab3c164f844e1cc3bc862b6f6cb43b25e383b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:44:50.756198  381160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:50.756227  381160 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:50.756243  381160 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:50.756272  381160 start.go:360] acquireMachinesLock for default-k8s-diff-port-942905: {Name:mk8b7fe5fa5304418be28440581999707ea8535f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:50.756386  381160 start.go:364] duration metric: took 90.378µs to acquireMachinesLock for "default-k8s-diff-port-942905"
	I1018 09:44:50.756417  381160 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:44:50.756498  381160 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:44:50.705612  381291 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:50.705746  381291 config.go:182] Loaded profile config "kubernetes-upgrade-919613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:50.705896  381291 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:44:50.731967  381291 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:44:50.732095  381291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.795538  381291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-18 09:44:50.785466804 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.795667  381291 docker.go:318] overlay module found
	I1018 09:44:50.797214  381291 out.go:179] * Using the docker driver based on user configuration
	I1018 09:44:50.798354  381291 start.go:305] selected driver: docker
	I1018 09:44:50.798368  381291 start.go:925] validating driver "docker" against <nil>
	I1018 09:44:50.798381  381291 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:50.799159  381291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.860410  381291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-18 09:44:50.848302273 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.860623  381291 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 09:44:50.860665  381291 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 09:44:50.860957  381291 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:44:50.862844  381291 out.go:179] * Using Docker driver with root privileges
	I1018 09:44:50.864893  381291 cni.go:84] Creating CNI manager for ""
	I1018 09:44:50.864958  381291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:50.864969  381291 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:44:50.865027  381291 start.go:349] cluster config:
	{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:50.866389  381291 out.go:179] * Starting "newest-cni-708733" primary control-plane node in "newest-cni-708733" cluster
	I1018 09:44:50.868222  381291 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:50.869335  381291 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:50.870399  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:50.870438  381291 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:50.870449  381291 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:50.870525  381291 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:50.870541  381291 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:50.870658  381291 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:50.870759  381291 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:44:50.870787  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json: {Name:mk20297a5c5ed1235f19ad5750426d4c2b3e1e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:44:50.892160  381291 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:50.892181  381291 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:50.892197  381291 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:50.892225  381291 start.go:360] acquireMachinesLock for newest-cni-708733: {Name:mkb1aaee475623ac79c9cbc5f8d5e2dda34020d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:50.892333  381291 start.go:364] duration metric: took 85.321µs to acquireMachinesLock for "newest-cni-708733"
	I1018 09:44:50.892359  381291 start.go:93] Provisioning new machine with config: &{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:44:50.892461  381291 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:44:50.411644  373771 addons.go:514] duration metric: took 521.349898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:44:50.690794  373771 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-055175" context rescaled to 1 replicas
	W1018 09:44:52.191073  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	I1018 09:44:48.854174  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:48.854598  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:48.854651  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:48.854706  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:48.885508  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:48.885532  353123 cri.go:89] found id: ""
	I1018 09:44:48.885540  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:48.885596  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:48.889991  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:48.890059  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:48.929152  353123 cri.go:89] found id: ""
	I1018 09:44:48.929181  353123 logs.go:282] 0 containers: []
	W1018 09:44:48.929190  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:48.929195  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:48.929243  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:48.957920  353123 cri.go:89] found id: ""
	I1018 09:44:48.957947  353123 logs.go:282] 0 containers: []
	W1018 09:44:48.957959  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:48.957968  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:48.958033  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:48.989162  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:48.989183  353123 cri.go:89] found id: ""
	I1018 09:44:48.989190  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:48.989251  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:48.993357  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:48.993430  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:49.021975  353123 cri.go:89] found id: ""
	I1018 09:44:49.022002  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.022012  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:49.022020  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:49.022076  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:49.049353  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:49.049379  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:49.049384  353123 cri.go:89] found id: ""
	I1018 09:44:49.049394  353123 logs.go:282] 2 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:49.049455  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:49.053550  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:49.057141  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:49.057204  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:49.083741  353123 cri.go:89] found id: ""
	I1018 09:44:49.083766  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.083790  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:49.083798  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:49.083871  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:49.117190  353123 cri.go:89] found id: ""
	I1018 09:44:49.117218  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.117239  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:49.117261  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:49.117279  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:49.166584  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:49.166619  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:49.227918  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:49.227941  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:49.227958  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:49.261790  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:49.261886  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:49.296838  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:49.296872  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:49.334167  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:49.334200  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:49.450870  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:49.450912  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:49.470257  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:49.470286  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:49.523546  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:49.523577  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:52.058905  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:52.060977  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:52.061040  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:52.061100  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:52.103424  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:52.103446  353123 cri.go:89] found id: ""
	I1018 09:44:52.103456  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:52.103527  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.108367  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:52.108434  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:52.136327  353123 cri.go:89] found id: ""
	I1018 09:44:52.136356  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.136367  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:52.136375  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:52.136437  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:52.168011  353123 cri.go:89] found id: ""
	I1018 09:44:52.168038  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.168049  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:52.168056  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:52.168122  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:52.198850  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:52.198872  353123 cri.go:89] found id: ""
	I1018 09:44:52.198881  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:52.198940  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.202937  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:52.203005  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:52.236765  353123 cri.go:89] found id: ""
	I1018 09:44:52.236795  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.236807  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:52.236816  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:52.236915  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:52.268756  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:52.268788  353123 cri.go:89] found id: ""
	I1018 09:44:52.268800  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:52.268892  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.273081  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:52.273159  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:52.301228  353123 cri.go:89] found id: ""
	I1018 09:44:52.301257  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.301268  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:52.301276  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:52.301342  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:52.333783  353123 cri.go:89] found id: ""
	I1018 09:44:52.333834  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.333846  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:52.333858  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:52.333875  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:52.383815  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:52.383877  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:52.422634  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:52.422664  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:52.533223  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:52.533265  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:52.552549  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:52.552581  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:52.626607  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:52.626631  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:52.626647  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:52.664502  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:52.664556  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:52.719127  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:52.719168  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
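
Each gathering pass above follows the same pattern: for every control-plane component, list matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail the logs of whatever was found with `crictl logs --tail 400 <id>`. A hedged Go sketch of that loop (not minikube's actual code; the component names and tail length are taken from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, name := range components {
		// List all containers (running or not) whose name matches the component.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each match, mirroring the log above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
		}
	}
}
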
	I1018 09:44:50.761803  381160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:44:50.762168  381160 start.go:159] libmachine.API.Create for "default-k8s-diff-port-942905" (driver="docker")
	I1018 09:44:50.762216  381160 client.go:168] LocalClient.Create starting
	I1018 09:44:50.762299  381160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:44:50.762346  381160 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.762373  381160 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.762459  381160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:44:50.762491  381160 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.762517  381160 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.763036  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:44:50.785386  381160 cli_runner.go:211] docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:44:50.785469  381160 network_create.go:284] running [docker network inspect default-k8s-diff-port-942905] to gather additional debugging logs...
	I1018 09:44:50.785501  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905
	W1018 09:44:50.803414  381160 cli_runner.go:211] docker network inspect default-k8s-diff-port-942905 returned with exit code 1
	I1018 09:44:50.803439  381160 network_create.go:287] error running [docker network inspect default-k8s-diff-port-942905]: docker network inspect default-k8s-diff-port-942905: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-942905 not found
	I1018 09:44:50.803452  381160 network_create.go:289] output of [docker network inspect default-k8s-diff-port-942905]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-942905 not found
	
	** /stderr **
	I1018 09:44:50.803568  381160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:44:50.825218  381160 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:44:50.825817  381160 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:44:50.826366  381160 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:44:50.826668  381160 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d2dbeb8dc9f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:9b:70:ff:9e:fe} reservation:<nil>}
	I1018 09:44:50.827249  381160 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:44:50.828084  381160 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d89850}
	I1018 09:44:50.828112  381160 network_create.go:124] attempt to create docker network default-k8s-diff-port-942905 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 09:44:50.828172  381160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 default-k8s-diff-port-942905
	I1018 09:44:50.891628  381160 network_create.go:108] docker network default-k8s-diff-port-942905 192.168.94.0/24 created
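
The subnet scan above walks candidate /24s whose third octet advances by 9 (192.168.49.0, .58.0, .67.0, ...) and takes the first one not already bound to a host bridge interface. A simplified sketch of that selection under those assumptions (pickSubnet is a hypothetical helper; the real logic in network.go also tracks reservations and interface details):

package main

import "fmt"

// pickSubnet returns the first candidate /24 not already taken,
// stepping the third octet by 9 as observed in the log above.
func pickSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 254; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(pickSubnet(taken)) // 192.168.94.0/24, matching the log
}
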
	I1018 09:44:50.891656  381160 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-942905" container
	I1018 09:44:50.891716  381160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:44:50.911268  381160 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-942905 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:44:50.932632  381160 oci.go:103] Successfully created a docker volume default-k8s-diff-port-942905
	I1018 09:44:50.932772  381160 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-942905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --entrypoint /usr/bin/test -v default-k8s-diff-port-942905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:44:51.344903  381160 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-942905
	I1018 09:44:51.344955  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:51.344981  381160 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:44:51.345068  381160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-942905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
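
The extraction step above runs a throwaway container whose entrypoint is tar: the preloaded lz4 tarball is mounted read-only at /preloaded.tar, the profile's named volume is mounted at /extractDir, and the archive is unpacked into it. A hedged Go wrapper around that same docker invocation (the helper name is an assumption; paths and image are copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball into a named docker volume
// by running tar inside a disposable container, as in the log above.
func extractPreload(tarball, volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := extractPreload(
		"/home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
		"default-k8s-diff-port-942905",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	)
	fmt.Println(err)
}
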
	I1018 09:44:50.894093  381291 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:44:50.894331  381291 start.go:159] libmachine.API.Create for "newest-cni-708733" (driver="docker")
	I1018 09:44:50.894364  381291 client.go:168] LocalClient.Create starting
	I1018 09:44:50.894422  381291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:44:50.894460  381291 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.894476  381291 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.894553  381291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:44:50.894584  381291 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.894602  381291 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.895030  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:44:50.914868  381291 cli_runner.go:211] docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:44:50.914941  381291 network_create.go:284] running [docker network inspect newest-cni-708733] to gather additional debugging logs...
	I1018 09:44:50.914967  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733
	W1018 09:44:50.933906  381291 cli_runner.go:211] docker network inspect newest-cni-708733 returned with exit code 1
	I1018 09:44:50.933948  381291 network_create.go:287] error running [docker network inspect newest-cni-708733]: docker network inspect newest-cni-708733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-708733 not found
	I1018 09:44:50.933963  381291 network_create.go:289] output of [docker network inspect newest-cni-708733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-708733 not found
	
	** /stderr **
	I1018 09:44:50.934151  381291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:44:50.952353  381291 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:44:50.953026  381291 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:44:50.953604  381291 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:44:50.953950  381291 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d2dbeb8dc9f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:9b:70:ff:9e:fe} reservation:<nil>}
	I1018 09:44:50.954528  381291 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:44:50.955055  381291 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0fd78e2b1cc4 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4a:53:cb:95:ba:9d} reservation:<nil>}
	I1018 09:44:50.955759  381291 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e76aa0}
	I1018 09:44:50.955786  381291 network_create.go:124] attempt to create docker network newest-cni-708733 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1018 09:44:50.955871  381291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-708733 newest-cni-708733
	I1018 09:44:51.020118  381291 network_create.go:108] docker network newest-cni-708733 192.168.103.0/24 created
	I1018 09:44:51.020149  381291 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-708733" container
	I1018 09:44:51.020201  381291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:44:51.038937  381291 cli_runner.go:164] Run: docker volume create newest-cni-708733 --label name.minikube.sigs.k8s.io=newest-cni-708733 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:44:51.059729  381291 oci.go:103] Successfully created a docker volume newest-cni-708733
	I1018 09:44:51.059811  381291 cli_runner.go:164] Run: docker run --rm --name newest-cni-708733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-708733 --entrypoint /usr/bin/test -v newest-cni-708733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:44:51.480607  381291 oci.go:107] Successfully prepared a docker volume newest-cni-708733
	I1018 09:44:51.480663  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:51.480688  381291 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:44:51.480777  381291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-708733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1018 09:44:54.739779  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	W1018 09:44:56.744791  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	I1018 09:44:55.251736  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:55.252176  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:55.252232  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:55.252291  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:55.279770  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:55.279808  353123 cri.go:89] found id: ""
	I1018 09:44:55.279831  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:55.279888  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.283764  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:55.283877  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:55.310174  353123 cri.go:89] found id: ""
	I1018 09:44:55.310200  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.310212  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:55.310220  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:55.310283  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:55.336491  353123 cri.go:89] found id: ""
	I1018 09:44:55.336516  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.336524  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:55.336530  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:55.336594  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:55.362990  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:55.363016  353123 cri.go:89] found id: ""
	I1018 09:44:55.363026  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:55.363093  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.367531  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:55.367608  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:55.393317  353123 cri.go:89] found id: ""
	I1018 09:44:55.393339  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.393347  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:55.393353  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:55.393400  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:55.420073  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:55.420093  353123 cri.go:89] found id: ""
	I1018 09:44:55.420101  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:55.420158  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.424059  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:55.424114  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:55.451671  353123 cri.go:89] found id: ""
	I1018 09:44:55.451695  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.451702  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:55.451709  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:55.451755  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:55.478444  353123 cri.go:89] found id: ""
	I1018 09:44:55.478469  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.478477  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:55.478486  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:55.478500  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:55.505264  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:55.505291  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:55.551185  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:55.551218  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:55.581868  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:55.581894  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:55.671081  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:55.671117  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:55.690572  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:55.690612  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:55.750418  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:55.750437  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:55.750450  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:55.781300  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:55.781331  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:58.332568  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:58.333057  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:58.333116  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:58.333175  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:58.367383  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:58.367411  353123 cri.go:89] found id: ""
	I1018 09:44:58.367421  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:58.367477  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.372128  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:58.372310  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:58.401796  353123 cri.go:89] found id: ""
	I1018 09:44:58.401853  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.401866  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:58.401875  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:58.401941  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:58.433947  353123 cri.go:89] found id: ""
	I1018 09:44:58.433980  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.433992  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:58.434000  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:58.434066  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:58.464332  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:58.464358  353123 cri.go:89] found id: ""
	I1018 09:44:58.464369  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:58.464434  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.468752  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:58.468855  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:58.501219  353123 cri.go:89] found id: ""
	I1018 09:44:58.501270  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.501281  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:58.501289  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:58.501360  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:58.540335  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:58.540359  353123 cri.go:89] found id: ""
	I1018 09:44:58.540369  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:58.540426  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.545307  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:58.545381  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:58.573432  353123 cri.go:89] found id: ""
	I1018 09:44:58.573462  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.573471  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:58.573477  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:58.573522  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:58.604321  353123 cri.go:89] found id: ""
	I1018 09:44:58.604353  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.604365  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:58.604379  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:58.604397  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:58.291368  381160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-942905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.946241785s)
	I1018 09:44:58.291407  381160 kic.go:203] duration metric: took 6.946420512s to extract preloaded images to volume ...
	W1018 09:44:58.291494  381160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:44:58.291543  381160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:44:58.291587  381160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:44:58.358186  381160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-942905 --name default-k8s-diff-port-942905 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --network default-k8s-diff-port-942905 --ip 192.168.94.2 --volume default-k8s-diff-port-942905:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:44:58.668690  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Running}}
	I1018 09:44:58.693054  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:58.712905  381160 cli_runner.go:164] Run: docker exec default-k8s-diff-port-942905 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:44:58.759488  381160 oci.go:144] the created container "default-k8s-diff-port-942905" has a running status.
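
The docker run above publishes container ports 22, 2376, 5000, 8444 and 32443 to ephemeral 127.0.0.1 ports; the SSH port (33206 later in this log) is recovered by asking docker which HostPort was bound to 22/tcp. A small sketch of that lookup, mirroring the inspect template used below:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the host-side port docker bound to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("default-k8s-diff-port-942905")
	fmt.Println(port, err) // e.g. 33206 <nil>
}
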
	I1018 09:44:58.759536  381160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa...
	I1018 09:44:59.120033  381160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:44:59.153002  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:59.179797  381160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:44:59.179835  381160 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-942905 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:44:59.227960  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:59.251706  381160 machine.go:93] provisionDockerMachine start ...
	I1018 09:44:59.251812  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.274634  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.275009  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.275029  381160 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:44:59.417050  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:44:59.417084  381160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-942905"
	I1018 09:44:59.417150  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.438561  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.438955  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.438980  381160 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942905 && echo "default-k8s-diff-port-942905" | sudo tee /etc/hostname
	I1018 09:44:59.590383  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:44:59.590489  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.608734  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.609014  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.609045  381160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:44:59.744586  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:44:59.744640  381160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:44:59.744670  381160 ubuntu.go:190] setting up certificates
	I1018 09:44:59.744685  381160 provision.go:84] configureAuth start
	I1018 09:44:59.744747  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:44:59.762856  381160 provision.go:143] copyHostCerts
	I1018 09:44:59.762936  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:44:59.762949  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:44:59.763041  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:44:59.763192  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:44:59.763209  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:44:59.763254  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:44:59.763365  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:44:59.763380  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:44:59.763421  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:44:59.763522  381160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942905 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-942905 localhost minikube]
	I1018 09:45:00.359137  381160 provision.go:177] copyRemoteCerts
	I1018 09:45:00.359208  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:00.359255  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.376601  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:00.471629  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:00.490954  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:45:00.508779  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:00.527720  381160 provision.go:87] duration metric: took 783.019645ms to configureAuth
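
The configureAuth step just completed generates a server certificate whose SANs are exactly those listed in the log: 127.0.0.1, the container's static IP, the profile name, localhost and minikube. A hedged sketch of building such a certificate template with Go's x509 package (key size, validity and usage flags are assumptions, and the real certificate is signed by the machine CA rather than self-signed as here):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// serverCertTemplate mirrors the SANs seen in the provision log above.
func serverCertTemplate(profile string, ip net.IP) *x509.Certificate {
	return &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins." + profile}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
		DNSNames:     []string{profile, "localhost", "minikube"},
	}
}

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := serverCertTemplate("default-k8s-diff-port-942905", net.ParseIP("192.168.94.2"))
	// Self-signed for the sketch; minikube signs with its CA key instead.
	_, _ = x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
}
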
	I1018 09:45:00.527744  381160 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:00.527928  381160 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:00.528036  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.545927  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:00.546200  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:45:00.546218  381160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:44:58.291901  381291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-708733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.811079626s)
	I1018 09:44:58.291950  381291 kic.go:203] duration metric: took 6.811257788s to extract preloaded images to volume ...
	W1018 09:44:58.292045  381291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:44:58.292087  381291 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:44:58.292133  381291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:44:58.358184  381291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-708733 --name newest-cni-708733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-708733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-708733 --network newest-cni-708733 --ip 192.168.103.2 --volume newest-cni-708733:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:44:58.789904  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Running}}
	I1018 09:44:58.810672  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:58.840588  381291 cli_runner.go:164] Run: docker exec newest-cni-708733 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:44:58.892619  381291 oci.go:144] the created container "newest-cni-708733" has a running status.
	I1018 09:44:58.892654  381291 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa...
	I1018 09:44:59.437020  381291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:44:59.464248  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:59.484885  381291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:44:59.484909  381291 kic_runner.go:114] Args: [docker exec --privileged newest-cni-708733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:44:59.531443  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:59.551011  381291 machine.go:93] provisionDockerMachine start ...
	I1018 09:44:59.551106  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.567782  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.568081  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.568096  381291 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:44:59.701673  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:44:59.701700  381291 ubuntu.go:182] provisioning hostname "newest-cni-708733"
	I1018 09:44:59.701758  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.719388  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.719681  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.719704  381291 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708733 && echo "newest-cni-708733" | sudo tee /etc/hostname
	I1018 09:44:59.870706  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:44:59.870801  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.890531  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.890745  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.890763  381291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:00.024744  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:00.024774  381291 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:00.024807  381291 ubuntu.go:190] setting up certificates
	I1018 09:45:00.024842  381291 provision.go:84] configureAuth start
	I1018 09:45:00.024902  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:00.043035  381291 provision.go:143] copyHostCerts
	I1018 09:45:00.043103  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:00.043116  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:00.043168  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:00.043275  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:00.043285  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:00.043306  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:00.043371  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:00.043378  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:00.043396  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:00.043444  381291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708733 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733]
	I1018 09:45:00.327989  381291 provision.go:177] copyRemoteCerts
	I1018 09:45:00.328049  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:00.328084  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.347868  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:00.444921  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:00.464010  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:45:00.482098  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:00.499378  381291 provision.go:87] duration metric: took 474.517909ms to configureAuth
	I1018 09:45:00.499406  381291 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:00.499605  381291 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:00.499725  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.519511  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:00.519721  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:45:00.519737  381291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:00.771966  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:00.771998  381291 machine.go:96] duration metric: took 1.220960491s to provisionDockerMachine
	I1018 09:45:00.772012  381291 client.go:171] duration metric: took 9.877637415s to LocalClient.Create
	I1018 09:45:00.772034  381291 start.go:167] duration metric: took 9.87770527s to libmachine.API.Create "newest-cni-708733"
	I1018 09:45:00.772051  381291 start.go:293] postStartSetup for "newest-cni-708733" (driver="docker")
	I1018 09:45:00.772064  381291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:00.772130  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:00.772181  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.795971  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:00.898970  381291 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:00.902632  381291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:00.902666  381291 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:00.902677  381291 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:00.902723  381291 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:00.902835  381291 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:00.902964  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:00.910995  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:00.933080  381291 start.go:296] duration metric: took 161.017858ms for postStartSetup
	I1018 09:45:00.933429  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:00.952417  381291 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:00.953438  381291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:00.953481  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.972137  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.066959  381291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:01.071412  381291 start.go:128] duration metric: took 10.178935412s to createHost
	I1018 09:45:01.071434  381291 start.go:83] releasing machines lock for "newest-cni-708733", held for 10.179088829s
	I1018 09:45:01.071491  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:01.088634  381291 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:01.088695  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:01.088695  381291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:01.088786  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:01.112801  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.113918  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.276879  381291 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:01.283898  381291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:01.328440  381291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:01.333133  381291 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:01.333205  381291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:01.361175  381291 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
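	The find/mv step above sidelines any preexisting bridge or podman CNI definitions by renaming them with a .mk_disabled suffix, leaving only the CNI that minikube installs. The log prints the command with its shell metacharacters unescaped; quoted for an interactive shell it would look roughly like:

		sudo find /etc/cni/net.d -maxdepth 1 -type f \
		  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
		  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;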
	I1018 09:45:01.361202  381291 start.go:495] detecting cgroup driver to use...
	I1018 09:45:01.361231  381291 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:01.361272  381291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:01.378234  381291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:01.391647  381291 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:01.391707  381291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:01.409914  381291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:01.429388  381291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:01.526350  381291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:01.630562  381291 docker.go:234] disabling docker service ...
	I1018 09:45:01.630633  381291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:01.653759  381291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:01.667643  381291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:01.778059  381291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:01.887041  381291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
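	Because this cluster runs on CRI-O, the competing runtimes are stopped, disabled, and masked above so neither a reboot nor socket activation can bring them back; the final is-active probe confirms docker stayed down. The same sequence by hand, as a sketch:

		for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
		  sudo systemctl stop -f "$unit"        # stop whatever is currently running
		done
		sudo systemctl disable cri-docker.socket docker.socket
		sudo systemctl mask cri-docker.service docker.service
		sudo systemctl is-active --quiet docker || echo "docker is inactive"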
	I1018 09:45:01.901660  381291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:01.918988  381291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:01.919052  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.932977  381291 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:01.933047  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.943533  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.953542  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.963614  381291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:01.972556  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.982181  381291 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.996679  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.007480  381291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:02.015934  381291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:02.024007  381291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.123050  381291 ssh_runner.go:195] Run: sudo systemctl restart crio
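	The run of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set the cgroup manager to systemd, put conmon in the pod cgroup, and seed default_sysctls so unprivileged ports start at 0; CRI-O is then restarted to pick the file up. Condensed into one annotated sketch using the same expressions the log shows:

		CONF=/etc/crio/crio.conf.d/02-crio.conf
		sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
		sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
		sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                          # drop any stale value
		sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"   # re-add it in place
		sudo grep -q '^ *default_sysctls' "$CONF" || \
		  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
		sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
		sudo systemctl daemon-reload && sudo systemctl restart crio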
	I1018 09:45:02.246923  381291 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:02.246995  381291 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:02.251619  381291 start.go:563] Will wait 60s for crictl version
	I1018 09:45:02.251683  381291 ssh_runner.go:195] Run: which crictl
	I1018 09:45:02.256150  381291 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:02.283457  381291 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:02.283534  381291 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.316271  381291 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.351268  381291 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:02.352768  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:02.370940  381291 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:02.376017  381291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
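	The one-liner above is minikube's idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current mapping to the docker network's gateway, and copy the result back over the original. Spread out for readability:

		{
		  grep -v $'\thost.minikube.internal$' /etc/hosts     # keep every other entry
		  printf '192.168.103.1\thost.minikube.internal\n'    # re-add the fresh mapping
		} > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts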
	I1018 09:45:00.811078  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:00.811113  381160 machine.go:96] duration metric: took 1.559383872s to provisionDockerMachine
	I1018 09:45:00.811126  381160 client.go:171] duration metric: took 10.048900106s to LocalClient.Create
	I1018 09:45:00.811151  381160 start.go:167] duration metric: took 10.048987547s to libmachine.API.Create "default-k8s-diff-port-942905"
	I1018 09:45:00.811164  381160 start.go:293] postStartSetup for "default-k8s-diff-port-942905" (driver="docker")
	I1018 09:45:00.811178  381160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:00.811254  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:00.811299  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.830550  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:00.928438  381160 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:00.931979  381160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:00.932011  381160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:00.932023  381160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:00.932073  381160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:00.932183  381160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:00.932322  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:00.940162  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:00.960740  381160 start.go:296] duration metric: took 149.561722ms for postStartSetup
	I1018 09:45:00.961086  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:45:00.979805  381160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:45:00.980166  381160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:00.980207  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.997884  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.093448  381160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:01.104589  381160 start.go:128] duration metric: took 10.348071861s to createHost
	I1018 09:45:01.104621  381160 start.go:83] releasing machines lock for "default-k8s-diff-port-942905", held for 10.348219433s
	I1018 09:45:01.104710  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:45:01.127607  381160 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:01.127676  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:01.127704  381160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:01.127778  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:01.150611  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.154699  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.253321  381160 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:01.326956  381160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:01.363983  381160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:01.368694  381160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:01.368747  381160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:01.397138  381160 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:45:01.397161  381160 start.go:495] detecting cgroup driver to use...
	I1018 09:45:01.397192  381160 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:01.397237  381160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:01.413222  381160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:01.426074  381160 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:01.426124  381160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:01.444782  381160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:01.468099  381160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:01.562373  381160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:01.672401  381160 docker.go:234] disabling docker service ...
	I1018 09:45:01.672469  381160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:01.694710  381160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:01.714252  381160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:01.829193  381160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:01.931303  381160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:01.946887  381160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:01.962333  381160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:01.962397  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.973621  381160 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:01.973690  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.983444  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.992651  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.003641  381160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:02.013656  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.023401  381160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.039670  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.049256  381160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:02.064093  381160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:02.073727  381160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.172619  381160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:02.289299  381160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:02.289388  381160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:02.293813  381160 start.go:563] Will wait 60s for crictl version
	I1018 09:45:02.293900  381160 ssh_runner.go:195] Run: which crictl
	I1018 09:45:02.297617  381160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:02.327297  381160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:02.327375  381160 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.357910  381160 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.390061  381291 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:45:02.390874  381160 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 09:44:59.191088  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	I1018 09:45:01.191208  373771 node_ready.go:49] node "embed-certs-055175" is "Ready"
	I1018 09:45:01.191253  373771 node_ready.go:38] duration metric: took 11.00402594s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:01.191272  373771 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:01.191356  373771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:01.205763  373771 api_server.go:72] duration metric: took 11.31550879s to wait for apiserver process to appear ...
	I1018 09:45:01.205805  373771 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:01.205851  373771 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:01.210749  373771 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:45:01.211633  373771 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:01.211659  373771 api_server.go:131] duration metric: took 5.845331ms to wait for apiserver health ...
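	The health probe above is a plain HTTPS GET against the apiserver's /healthz endpoint, which answers 200 with body "ok" once the server is up; under the default RBAC this path is typically readable without credentials. Checked by hand, either skipping TLS verification or pinning minikube's CA:

		curl -sk https://192.168.76.2:8443/healthz
		curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.76.2:8443/healthz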
	I1018 09:45:01.211670  373771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:01.215349  373771 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:01.215380  373771 system_pods.go:61] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:01.215386  373771 system_pods.go:61] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.215393  373771 system_pods.go:61] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.215397  373771 system_pods.go:61] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.215405  373771 system_pods.go:61] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.215408  373771 system_pods.go:61] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.215411  373771 system_pods.go:61] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.215416  373771 system_pods.go:61] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:01.215426  373771 system_pods.go:74] duration metric: took 3.750342ms to wait for pod list to return data ...
	I1018 09:45:01.215436  373771 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:01.217968  373771 default_sa.go:45] found service account: "default"
	I1018 09:45:01.217991  373771 default_sa.go:55] duration metric: took 2.548354ms for default service account to be created ...
	I1018 09:45:01.218001  373771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:45:01.220282  373771 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:01.220312  373771 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:01.220319  373771 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.220327  373771 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.220333  373771 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.220340  373771 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.220345  373771 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.220351  373771 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.220358  373771 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:01.220400  373771 retry.go:31] will retry after 292.027072ms: missing components: kube-dns
	I1018 09:45:01.517165  373771 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:01.517195  373771 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running
	I1018 09:45:01.517200  373771 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.517203  373771 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.517208  373771 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.517212  373771 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.517215  373771 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.517218  373771 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.517221  373771 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running
	I1018 09:45:01.517228  373771 system_pods.go:126] duration metric: took 299.220385ms to wait for k8s-apps to be running ...
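	The k8s-apps wait above polls the kube-system pod list until nothing is left Pending; the 292ms retry fired because coredns and storage-provisioner had not come up yet. The same condition expressed with kubectl:

		kubectl -n kube-system get pods                      # the 8 pods listed above
		kubectl -n kube-system get pods \
		  --field-selector=status.phase!=Running -o name     # empty output means all Running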
	I1018 09:45:01.517235  373771 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:45:01.517278  373771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:01.530674  373771 system_svc.go:56] duration metric: took 13.426605ms WaitForService to wait for kubelet
	I1018 09:45:01.530709  373771 kubeadm.go:586] duration metric: took 11.640461228s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:45:01.530731  373771 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:01.534308  373771 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:01.534331  373771 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:01.534354  373771 node_conditions.go:105] duration metric: took 3.608017ms to run NodePressure ...
	I1018 09:45:01.534369  373771 start.go:241] waiting for startup goroutines ...
	I1018 09:45:01.534378  373771 start.go:246] waiting for cluster config update ...
	I1018 09:45:01.534387  373771 start.go:255] writing updated cluster config ...
	I1018 09:45:01.534640  373771 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:01.538546  373771 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:01.542230  373771 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.547256  373771 pod_ready.go:94] pod "coredns-66bc5c9577-ksdf9" is "Ready"
	I1018 09:45:01.547284  373771 pod_ready.go:86] duration metric: took 5.031552ms for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.549343  373771 pod_ready.go:83] waiting for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.553589  373771 pod_ready.go:94] pod "etcd-embed-certs-055175" is "Ready"
	I1018 09:45:01.553619  373771 pod_ready.go:86] duration metric: took 4.251109ms for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.555860  373771 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.560049  373771 pod_ready.go:94] pod "kube-apiserver-embed-certs-055175" is "Ready"
	I1018 09:45:01.560072  373771 pod_ready.go:86] duration metric: took 4.189026ms for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.562507  373771 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.943358  373771 pod_ready.go:94] pod "kube-controller-manager-embed-certs-055175" is "Ready"
	I1018 09:45:01.943391  373771 pod_ready.go:86] duration metric: took 380.861522ms for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.143523  373771 pod_ready.go:83] waiting for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.542665  373771 pod_ready.go:94] pod "kube-proxy-9n98q" is "Ready"
	I1018 09:45:02.542690  373771 pod_ready.go:86] duration metric: took 399.136576ms for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.743469  373771 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:03.143247  373771 pod_ready.go:94] pod "kube-scheduler-embed-certs-055175" is "Ready"
	I1018 09:45:03.143279  373771 pod_ready.go:86] duration metric: took 399.784483ms for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:03.143292  373771 pod_ready.go:40] duration metric: took 1.604710305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:03.189033  373771 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:45:03.191585  373771 out.go:179] * Done! kubectl is now configured to use "embed-certs-055175" cluster and "default" namespace by default
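	The "extra waiting" pass that precedes the Done! line walks the control-plane pods label by label and blocks until each reports Ready. kubectl can express the same wait directly; for two of the labels from the log:

		kubectl -n kube-system wait pod -l k8s-app=kube-dns \
		  --for=condition=Ready --timeout=4m
		kubectl -n kube-system wait pod -l component=kube-apiserver \
		  --for=condition=Ready --timeout=4m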
	I1018 09:44:58.655042  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:58.655074  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:58.688346  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:58.688380  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:58.750658  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:58.750699  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:58.785634  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:58.785664  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:58.933097  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:58.933133  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:58.959738  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:58.959770  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:59.060360  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:59.060387  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:59.060404  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
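	The gathering loop above (pid 353123, a third profile whose apiserver is down) pulls the last 400 lines from every source that is still reachable: container logs via crictl, the CRI-O and kubelet journals, and dmesg; the kubectl describe fails because nothing answers on localhost:8443. The same collection by hand, with a placeholder for the container ID:

		sudo crictl logs --tail 400 <container-id>     # any control-plane container
		sudo journalctl -u crio -n 400                 # CRI-O runtime journal
		sudo journalctl -u kubelet -n 400              # kubelet journal
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400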
	I1018 09:45:01.607920  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:01.608518  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:01.608589  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:01.608650  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:01.644374  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:01.644398  353123 cri.go:89] found id: ""
	I1018 09:45:01.644410  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:01.644472  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.649392  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:01.649465  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:01.678954  353123 cri.go:89] found id: ""
	I1018 09:45:01.678983  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.678994  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:01.679005  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:01.679068  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:01.715079  353123 cri.go:89] found id: ""
	I1018 09:45:01.715110  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.715121  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:01.715129  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:01.715191  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:01.743578  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:01.743613  353123 cri.go:89] found id: ""
	I1018 09:45:01.743624  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:01.743685  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.749121  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:01.749204  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:01.781635  353123 cri.go:89] found id: ""
	I1018 09:45:01.781663  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.781673  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:01.781681  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:01.781748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:01.811864  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:01.811891  353123 cri.go:89] found id: ""
	I1018 09:45:01.811903  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:01.811969  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.819899  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:01.820044  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:01.851980  353123 cri.go:89] found id: ""
	I1018 09:45:01.852008  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.852023  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:01.852031  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:01.852100  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:01.889801  353123 cri.go:89] found id: ""
	I1018 09:45:01.889843  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.889857  353123 logs.go:284] No container was found matching "storage-provisioner"
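	The inventory pass above queries CRI-O for containers by name, one component at a time; on this node only kube-apiserver, kube-scheduler, and kube-controller-manager exist, so etcd, coredns, kube-proxy, kindnet, and storage-provisioner all come back empty. The underlying query looks like:

		sudo crictl ps -a --quiet --name=kube-apiserver   # prints matching container IDs
		sudo crictl ps -a --quiet --name=etcd             # empty here: no etcd container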
	I1018 09:45:01.889868  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:01.889883  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:01.920012  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:01.920038  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:01.973711  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:01.973741  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:02.007947  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:02.007976  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:02.112317  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:02.112352  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:02.132749  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:02.132780  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:02.204788  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:02.204809  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:02.204841  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:02.243306  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:02.243346  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:02.392601  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:02.411071  381160 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:02.415686  381160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:02.428058  381160 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:02.428202  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:02.428261  381160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.462671  381160 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.462692  381160 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:02.462737  381160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.491340  381160 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.491365  381160 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:02.491373  381160 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:45:02.491453  381160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-942905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:02.491513  381160 ssh_runner.go:195] Run: crio config
	I1018 09:45:02.539294  381160 cni.go:84] Creating CNI manager for ""
	I1018 09:45:02.539317  381160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:02.539340  381160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:45:02.539361  381160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942905 NodeName:default-k8s-diff-port-942905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:02.539485  381160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:02.539544  381160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:02.548244  381160 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:02.548311  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:02.556665  381160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:45:02.569912  381160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:02.585539  381160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
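	At this point the rendered kubeadm config (2224 bytes) sits at /var/tmp/minikube/kubeadm.yaml.new beside the kubelet unit files. One way to exercise such a file without changing the node, assuming kubeadm v1.34.1 is on the PATH, is kubeadm's dry-run mode:

		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run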
	I1018 09:45:02.599103  381160 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:02.603152  381160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:02.616312  381160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.702091  381160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:02.733016  381160 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905 for IP: 192.168.94.2
	I1018 09:45:02.733040  381160 certs.go:195] generating shared ca certs ...
	I1018 09:45:02.733060  381160 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:02.733237  381160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:02.733279  381160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:02.733289  381160 certs.go:257] generating profile certs ...
	I1018 09:45:02.733342  381160 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key
	I1018 09:45:02.733362  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt with IP's: []
	I1018 09:45:03.027373  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt ...
	I1018 09:45:03.027397  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt: {Name:mk981af9917b6ac92974b225166ec0395d71372f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.027562  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key ...
	I1018 09:45:03.027582  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key: {Name:mkd2ccf0788c296cb00266f87e9a3f936c6bb097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.027707  381160 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca
	I1018 09:45:03.027732  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:45:03.455977  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca ...
	I1018 09:45:03.456007  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca: {Name:mk2889a394c4a49479ba0dac8a102927df330339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.456154  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca ...
	I1018 09:45:03.456166  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca: {Name:mk5a06293ffa6e89403afb34f76f87cc2a90226d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.456241  381160 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt
	I1018 09:45:03.456326  381160 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key
	I1018 09:45:03.456393  381160 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key
	I1018 09:45:03.456410  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt with IP's: []
	I1018 09:45:03.745412  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt ...
	I1018 09:45:03.745442  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt: {Name:mk3e7ea9bc969efb2a6fa264abfdc7649bac7488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.745615  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key ...
	I1018 09:45:03.745629  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key: {Name:mk9070cc1f6e0ec8f11fe644828ed9f3eab55e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
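	The crypto.go steps above mint three certs signed by the shared minikube CA: a client cert for "minikube-user", an apiserver serving cert with the four IP SANs listed, and an aggregator proxy-client cert. A rough openssl equivalent for just the apiserver cert (file names are placeholders; the SANs are copied from the log):

		openssl genrsa -out apiserver.key 2048
		openssl req -new -key apiserver.key -subj '/CN=minikube' -out apiserver.csr
		openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
		  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2') \
		  -days 365 -out apiserver.crt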
	I1018 09:45:03.745795  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:03.745853  381160 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:03.745864  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:03.745885  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:03.745906  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:03.745927  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:03.745965  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:03.746557  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:03.766750  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:03.785188  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:03.803488  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:03.822158  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:45:03.841444  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:03.861945  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:03.880231  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:03.899112  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:03.919492  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:03.938037  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:03.956161  381160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:03.969096  381160 ssh_runner.go:195] Run: openssl version
	I1018 09:45:03.976951  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:03.985944  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.990631  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.990698  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:04.030208  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:04.039727  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:04.049165  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.053153  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.053225  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.094573  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:04.105540  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:04.115460  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.119424  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.119486  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.154663  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
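The b5213941.0, 51391683.0, and 3ec20f2e.0 symlink names above are OpenSSL subject hashes, which is how the system trust store indexes CA certificates. A minimal sketch of the same convention (cert path taken from the log; the hash variable is illustrative):

	# Compute the subject hash OpenSSL's lookup code expects; /etc/ssl/certs/<hash>.0
	# is the name it searches for (the .0 suffix disambiguates hash collisions).
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"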
	I1018 09:45:04.163890  381160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:04.167557  381160 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:45:04.167618  381160 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:04.167716  381160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:04.167770  381160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:04.196236  381160 cri.go:89] found id: ""
	I1018 09:45:04.196321  381160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:04.205172  381160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:45:04.213562  381160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:45:04.213646  381160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:45:04.221974  381160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:45:04.221990  381160 kubeadm.go:157] found existing configuration files:
	
	I1018 09:45:04.222039  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:45:04.229745  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:45:04.229812  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:45:04.238298  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:45:04.246658  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:45:04.246716  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:45:04.255972  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:45:04.264031  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:45:04.264087  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:45:04.272422  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:45:04.280843  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:45:04.280903  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:45:04.288575  381160 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:45:04.360700  381160 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:45:04.428420  381160 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
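The two [WARNING] lines above are routine under the docker driver (the log itself notes that minikube ignores SystemVerification for kubeadm there). To reproduce just this stage by hand, a hedged sketch; the phase invocation is not part of this log:

	# Run only kubeadm's preflight checks against the same generated config,
	# skipping the same error classes minikube ignores for container drivers.
	sudo kubeadm init phase preflight \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification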
	I1018 09:45:02.391762  381291 kubeadm.go:883] updating cluster {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:02.391993  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:02.392084  381291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.425187  381291 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.425210  381291 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:02.425255  381291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.454521  381291 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.454553  381291 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:02.454563  381291 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:02.454690  381291 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-708733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:02.454778  381291 ssh_runner.go:195] Run: crio config
	I1018 09:45:02.503782  381291 cni.go:84] Creating CNI manager for ""
	I1018 09:45:02.503810  381291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:02.503856  381291 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:45:02.503896  381291 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-708733 NodeName:newest-cni-708733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:02.504052  381291 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-708733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:02.504122  381291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:02.513289  381291 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:02.513358  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:02.521575  381291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:02.534678  381291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:02.552132  381291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
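The config dumped above is serialized to kubeadm.yaml.new and only copied over kubeadm.yaml once the cluster-state checks pass (the sudo cp appears later in the log). A hedged sketch for sanity-checking the file before init, assuming this kubeadm build ships the "config validate" subcommand:

	# Validate the generated kubeadm config without starting anything.
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new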
	I1018 09:45:02.565736  381291 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:02.569535  381291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
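The /etc/hosts rewrite above uses a strip-then-append pattern through a temp file because a plain "sudo grep ... > /etc/hosts" would apply the redirection in the unprivileged caller's shell. The same pattern, spelled out with comments:

	# Drop any existing control-plane.minikube.internal line, append the desired
	# mapping, then install the result as root via cp (not via redirection).
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo "192.168.103.2	control-plane.minikube.internal"
	} > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts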
	I1018 09:45:02.579949  381291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.671930  381291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:02.696682  381291 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733 for IP: 192.168.103.2
	I1018 09:45:02.696707  381291 certs.go:195] generating shared ca certs ...
	I1018 09:45:02.696739  381291 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:02.696961  381291 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:02.697030  381291 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:02.697046  381291 certs.go:257] generating profile certs ...
	I1018 09:45:02.697127  381291 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key
	I1018 09:45:02.697158  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt with IP's: []
	I1018 09:45:03.021940  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt ...
	I1018 09:45:03.021971  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt: {Name:mk34305844f07bbce4828aa11fbd8babaff65d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.022156  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key ...
	I1018 09:45:03.022167  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key: {Name:mk4f3f93ab07dd49c2ff8ec3a1448251b4cac3b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.022246  381291 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd
	I1018 09:45:03.022263  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 09:45:03.175129  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd ...
	I1018 09:45:03.175158  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd: {Name:mkc25ea49b370be29f02b5a8660805e0ac00d4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.175332  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd ...
	I1018 09:45:03.175346  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd: {Name:mkf39e3929d7202cc4a55decf0767b42ac2055df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.175418  381291 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt
	I1018 09:45:03.175509  381291 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key
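The apiserver cert generated above carries SANs for 10.96.0.1 (the first IP of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service), 127.0.0.1, 10.0.0.1, and the node IP. A sketch for confirming the SANs on disk (cert path taken from the log; the openssl call is not part of it):

	# List the Subject Alternative Names baked into the freshly minted cert.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'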
	I1018 09:45:03.175572  381291 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key
	I1018 09:45:03.175596  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt with IP's: []
	I1018 09:45:03.410225  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt ...
	I1018 09:45:03.410260  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt: {Name:mk912f810ad1a80c75b05b8385bdc60578025312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.410467  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key ...
	I1018 09:45:03.410486  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key: {Name:mkb65103205eaab03d8160e628125e95f2c1c9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.410723  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:03.410761  381291 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:03.410772  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:03.410800  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:03.410837  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:03.410871  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:03.410920  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:03.411482  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:03.431424  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:03.449495  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:03.467299  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:03.485023  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:45:03.502535  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:03.520275  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:03.538841  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:03.556743  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:03.576835  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:03.594737  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:03.612576  381291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:03.625282  381291 ssh_runner.go:195] Run: openssl version
	I1018 09:45:03.631505  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:03.641420  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.646811  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.646891  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.695322  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:03.704814  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:03.713988  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.718050  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.718128  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.753040  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:03.762142  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:03.771213  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.775498  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.775558  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.812378  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:03.821406  381291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:03.825731  381291 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:45:03.825796  381291 kubeadm.go:400] StartCluster: {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:03.825918  381291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:03.825995  381291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:03.857069  381291 cri.go:89] found id: ""
	I1018 09:45:03.857136  381291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:03.865207  381291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:45:03.872979  381291 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:45:03.873033  381291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:45:03.881090  381291 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:45:03.881108  381291 kubeadm.go:157] found existing configuration files:
	
	I1018 09:45:03.881154  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:45:03.889156  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:45:03.889220  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:45:03.897045  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:45:03.905216  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:45:03.905300  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:45:03.912819  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:45:03.920656  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:45:03.920714  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:45:03.928538  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:45:03.936379  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:45:03.936437  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:45:03.944237  381291 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:45:04.009498  381291 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:45:04.083257  381291 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:45:04.801345  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:04.801714  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
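The healthz probe above fails simply because the apiserver container is not serving yet; it can be reproduced by hand. A sketch (the endpoint comes from the log, the curl call does not):

	# /healthz is typically readable by unauthenticated clients on kubeadm clusters;
	# -k skips TLS verification since the serving cert chains to minikube's private CA.
	curl -k https://192.168.85.2:8443/healthz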
	I1018 09:45:04.801769  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:04.801851  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:04.829837  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:04.829861  353123 cri.go:89] found id: ""
	I1018 09:45:04.829878  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:04.829947  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.834147  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:04.834225  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:04.866567  353123 cri.go:89] found id: ""
	I1018 09:45:04.866602  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.866613  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:04.866620  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:04.866680  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:04.897462  353123 cri.go:89] found id: ""
	I1018 09:45:04.897493  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.897505  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:04.897513  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:04.897579  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:04.929017  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:04.929042  353123 cri.go:89] found id: ""
	I1018 09:45:04.929052  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:04.929113  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.933633  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:04.933703  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:04.962539  353123 cri.go:89] found id: ""
	I1018 09:45:04.962572  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.962583  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:04.962590  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:04.962645  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:04.993181  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:04.993205  353123 cri.go:89] found id: ""
	I1018 09:45:04.993214  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:04.993272  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.997428  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:04.997550  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:05.028000  353123 cri.go:89] found id: ""
	I1018 09:45:05.028029  353123 logs.go:282] 0 containers: []
	W1018 09:45:05.028041  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:05.028049  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:05.028104  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:05.055921  353123 cri.go:89] found id: ""
	I1018 09:45:05.055951  353123 logs.go:282] 0 containers: []
	W1018 09:45:05.055962  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:05.055974  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:05.055988  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:05.102239  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:05.102275  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:05.134806  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:05.134860  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:05.248706  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:05.248744  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:05.268497  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:05.268527  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:05.334870  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:05.334893  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:05.334912  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:05.367588  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:05.367621  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:05.428601  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:05.428641  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:07.959155  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:07.959656  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:07.959714  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:07.959770  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:07.987228  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:07.987248  353123 cri.go:89] found id: ""
	I1018 09:45:07.987256  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:07.987311  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:07.991349  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:07.991416  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:08.019893  353123 cri.go:89] found id: ""
	I1018 09:45:08.019922  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.019932  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:08.019950  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:08.020007  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:08.050180  353123 cri.go:89] found id: ""
	I1018 09:45:08.050208  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.050220  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:08.050229  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:08.050295  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:08.089285  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:08.089310  353123 cri.go:89] found id: ""
	I1018 09:45:08.089321  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:08.089389  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:08.093682  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:08.093751  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:08.123444  353123 cri.go:89] found id: ""
	I1018 09:45:08.123472  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.123484  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:08.123503  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:08.123649  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:08.153159  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:08.153189  353123 cri.go:89] found id: ""
	I1018 09:45:08.153200  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:08.153263  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:08.157466  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:08.157556  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:08.189498  353123 cri.go:89] found id: ""
	I1018 09:45:08.189531  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.189542  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:08.189554  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:08.189639  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:08.220603  353123 cri.go:89] found id: ""
	I1018 09:45:08.220634  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.220646  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:08.220657  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:08.220670  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:08.262810  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:08.262863  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:08.369896  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:08.369934  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:08.390711  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:08.390742  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:08.448636  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:08.448666  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:08.448680  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:08.482980  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:08.483011  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:08.532288  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:08.532326  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:08.560227  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:08.560254  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 18 09:45:01 embed-certs-055175 crio[780]: time="2025-10-18T09:45:01.161232664Z" level=info msg="Starting container: f736f894e2029adb1b4a8e91a72c483640a0eadaabe0c824671fac61402266c2" id=a3bed410-e0bb-4046-8217-02abea094933 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:01 embed-certs-055175 crio[780]: time="2025-10-18T09:45:01.163725233Z" level=info msg="Started container" PID=1870 containerID=f736f894e2029adb1b4a8e91a72c483640a0eadaabe0c824671fac61402266c2 description=kube-system/coredns-66bc5c9577-ksdf9/coredns id=a3bed410-e0bb-4046-8217-02abea094933 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6a6f5a7abb36ad5f5bf473d0640302a1ac5ae0e5c847abbd6b056c71b060b3c
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.640446524Z" level=info msg="Running pod sandbox: default/busybox/POD" id=240b3ef1-f7eb-4f41-81f3-356ec3f04d94 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.640522973Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.647752185Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0b371519fd787557c89b5ed16dc136f26b674a107b22b72f7effb16f0a9184b3 UID:cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c NetNS:/var/run/netns/fd2e5768-2a0e-4180-a183-7fde3f96eadb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001288d0}] Aliases:map[]}"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.64792978Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.660063023Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0b371519fd787557c89b5ed16dc136f26b674a107b22b72f7effb16f0a9184b3 UID:cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c NetNS:/var/run/netns/fd2e5768-2a0e-4180-a183-7fde3f96eadb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001288d0}] Aliases:map[]}"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.660243193Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.661289437Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.662435002Z" level=info msg="Ran pod sandbox 0b371519fd787557c89b5ed16dc136f26b674a107b22b72f7effb16f0a9184b3 with infra container: default/busybox/POD" id=240b3ef1-f7eb-4f41-81f3-356ec3f04d94 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.663883329Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e4587d51-9684-45f5-885c-719b92524b60 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.664032356Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e4587d51-9684-45f5-885c-719b92524b60 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.664082226Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e4587d51-9684-45f5-885c-719b92524b60 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.664924878Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5500c26c-abbd-41b6-92a1-dc7934b8048b name=/runtime.v1.ImageService/PullImage
	Oct 18 09:45:03 embed-certs-055175 crio[780]: time="2025-10-18T09:45:03.668431264Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.764756945Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=5500c26c-abbd-41b6-92a1-dc7934b8048b name=/runtime.v1.ImageService/PullImage
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.765590735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4f84c09e-9a72-4d9f-aa10-96d3999c3c51 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.767036965Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04e36f0b-11f1-4379-90a2-a71f46727ef4 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.770409847Z" level=info msg="Creating container: default/busybox/busybox" id=7fbe799f-1333-475d-9571-5e5e3deec8c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.771304053Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.775720809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.776293373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.810467038Z" level=info msg="Created container 3bbc514907fe95b134a4dd9b8cc4a8d85c46c852d83b87913637299a659785df: default/busybox/busybox" id=7fbe799f-1333-475d-9571-5e5e3deec8c7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.811177276Z" level=info msg="Starting container: 3bbc514907fe95b134a4dd9b8cc4a8d85c46c852d83b87913637299a659785df" id=2780d5c4-a7dc-4945-b07c-220f84f7a951 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:05 embed-certs-055175 crio[780]: time="2025-10-18T09:45:05.813127725Z" level=info msg="Started container" PID=1946 containerID=3bbc514907fe95b134a4dd9b8cc4a8d85c46c852d83b87913637299a659785df description=default/busybox/busybox id=2780d5c4-a7dc-4945-b07c-220f84f7a951 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b371519fd787557c89b5ed16dc136f26b674a107b22b72f7effb16f0a9184b3
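The CRI-O excerpt above shows the busybox tag being resolved to a pinned digest at pull time. A sketch for inspecting that tag-to-digest mapping on the node (commands assumed present in the minikube image):

	# Pull by tag, then list images with digests to see what the tag resolved to.
	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	sudo crictl images --digests | grep busybox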
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	3bbc514907fe9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   0b371519fd787       busybox                                      default
	f736f894e2029       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   d6a6f5a7abb36       coredns-66bc5c9577-ksdf9                     kube-system
	1e29f99318226       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   4bd72ca464c0d       storage-provisioner                          kube-system
	9fda1a1a459be       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   6bc48e3f45f89       kube-proxy-9n98q                             kube-system
	1cccce7e05977       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   891bc1ceeac67       kindnet-tntfx                                kube-system
	f434ab481e29f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   67acf4c855306       etcd-embed-certs-055175                      kube-system
	3c439d3df1243       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   91337a312d2d3       kube-controller-manager-embed-certs-055175   kube-system
	d08ee7033be47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   4f777c5d4bc7a       kube-apiserver-embed-certs-055175            kube-system
	44d1756792d93       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   2ca67afb70a0e       kube-scheduler-embed-certs-055175            kube-system
	
	
	==> coredns [f736f894e2029adb1b4a8e91a72c483640a0eadaabe0c824671fac61402266c2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58846 - 8099 "HINFO IN 4005917662410033543.3804357063060511515. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022255496s
	
	
	==> describe nodes <==
	Name:               embed-certs-055175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-055175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-055175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_44_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:44:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-055175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:45:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:45:00 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:45:00 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:45:00 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:45:00 +0000   Sat, 18 Oct 2025 09:45:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-055175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                a753bc03-5449-4387-b526-2cbb885beb79
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-ksdf9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-055175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-tntfx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-055175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-055175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-9n98q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-055175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-055175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-055175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-055175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-055175 event: Registered Node embed-certs-055175 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-055175 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [f434ab481e29fd4882500483ad347f55f2eaa34bb73223f1127adec16cde1c9e] <==
	{"level":"warn","ts":"2025-10-18T09:44:40.776165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.784100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.792206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.799876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.807343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.814040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.822793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.830274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.838381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.844953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.852142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.860913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.876606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.885852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.894156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:44:40.962867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34428","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-18T09:44:56.640820Z","caller":"traceutil/trace.go:172","msg":"trace[1894861441] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"156.703143ms","start":"2025-10-18T09:44:56.484099Z","end":"2025-10-18T09:44:56.640803Z","steps":["trace[1894861441] 'process raft request'  (duration: 156.557411ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:57.435413Z","caller":"traceutil/trace.go:172","msg":"trace[1326585540] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"132.157498ms","start":"2025-10-18T09:44:57.303224Z","end":"2025-10-18T09:44:57.435382Z","steps":["trace[1326585540] 'process raft request'  (duration: 131.858625ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:44:57.580126Z","caller":"traceutil/trace.go:172","msg":"trace[1941010387] linearizableReadLoop","detail":"{readStateIndex:401; appliedIndex:401; }","duration":"102.118276ms","start":"2025-10-18T09:44:57.477983Z","end":"2025-10-18T09:44:57.580101Z","steps":["trace[1941010387] 'read index received'  (duration: 102.107579ms)","trace[1941010387] 'applied index is now lower than readState.Index'  (duration: 9.129µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:57.689657Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.622486ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:44:57.689755Z","caller":"traceutil/trace.go:172","msg":"trace[287486236] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:389; }","duration":"211.761118ms","start":"2025-10-18T09:44:57.477974Z","end":"2025-10-18T09:44:57.689735Z","steps":["trace[287486236] 'agreement among raft nodes before linearized reading'  (duration: 102.220743ms)","trace[287486236] 'range keys from in-memory index tree'  (duration: 109.376962ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:57.690309Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.613807ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356040967982529 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-055175\" mod_revision:389 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-055175\" value_size:7215 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-055175\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:44:57.690415Z","caller":"traceutil/trace.go:172","msg":"trace[1700078530] transaction","detail":"{read_only:false; response_revision:390; number_of_response:1; }","duration":"246.143986ms","start":"2025-10-18T09:44:57.444250Z","end":"2025-10-18T09:44:57.690394Z","steps":["trace[1700078530] 'process raft request'  (duration: 135.88283ms)","trace[1700078530] 'compare'  (duration: 109.508124ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-18T09:44:57.952810Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"152.937912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-18T09:44:57.952897Z","caller":"traceutil/trace.go:172","msg":"trace[1050665754] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:390; }","duration":"153.039426ms","start":"2025-10-18T09:44:57.799843Z","end":"2025-10-18T09:44:57.952882Z","steps":["trace[1050665754] 'range keys from in-memory index tree'  (duration: 152.851956ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:45:13 up  1:27,  0 user,  load average: 3.32, 2.96, 1.89
	Linux embed-certs-055175 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1cccce7e0597732ab7cdbd855c526899a3586f554009c767dcc732a1b2e133a4] <==
	I1018 09:44:49.976275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:44:49.976706       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:44:49.976871       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:44:49.976892       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:44:49.976904       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:44:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:44:50.174572       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:44:50.174608       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:44:50.258903       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:44:50.259184       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:44:50.559053       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:44:50.559083       1 metrics.go:72] Registering metrics
	I1018 09:44:50.559134       1 controller.go:711] "Syncing nftables rules"
	I1018 09:45:00.175958       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:45:00.176033       1 main.go:301] handling current node
	I1018 09:45:10.176938       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:45:10.177033       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d08ee7033be47eebddef1ebd3756d489080c694f3df8acc408eb515d0fc422eb] <==
	I1018 09:44:41.541088       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1018 09:44:41.542868       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1018 09:44:41.553430       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:44:41.553456       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:44:41.559565       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:44:41.561994       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:44:41.690467       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:44:42.399537       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:44:42.403692       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:44:42.403714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:44:42.974133       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:44:43.019915       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:44:43.108577       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:44:43.115740       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1018 09:44:43.117122       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:44:43.122438       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:44:43.597935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:44:44.329593       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:44:44.340752       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:44:44.350209       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:44:49.252425       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:44:49.352387       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:44:49.704562       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:44:49.710887       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1018 09:45:11.454073       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37238: use of closed network connection
	
	
	==> kube-controller-manager [3c439d3df124389446c56ac9277f241f4371b6d8955e5d8146856b5be1954df9] <==
	I1018 09:44:48.597707       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:44:48.597736       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:44:48.597769       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:44:48.597850       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-055175"
	I1018 09:44:48.597916       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:44:48.598975       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:44:48.598993       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:44:48.599027       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:44:48.599058       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:44:48.599104       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:44:48.599128       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:44:48.599145       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:44:48.599279       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:44:48.599330       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:44:48.599386       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:44:48.599398       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:44:48.599409       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:44:48.599519       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:44:48.601709       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:44:48.604875       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:44:48.608179       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:44:48.608191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:44:48.619543       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:44:48.621016       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:03.599539       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9fda1a1a459be862d223f240e68c714bc5c8e2277830d86103df4d006cf3e49c] <==
	I1018 09:44:49.791551       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:44:49.852614       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:44:49.954332       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:44:49.954392       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:44:49.954498       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:44:49.985565       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:44:49.985624       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:44:49.992438       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:44:49.993244       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:44:49.993273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:44:49.994913       1 config.go:200] "Starting service config controller"
	I1018 09:44:49.994937       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:44:49.995414       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:44:49.995424       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:44:49.995440       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:44:49.995445       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:44:49.995786       1 config.go:309] "Starting node config controller"
	I1018 09:44:49.995803       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:44:49.995810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:44:50.095192       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:44:50.095859       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:44:50.096502       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [44d1756792d9303c1951feaf09a99a4a9bac7f0d8a19d947a47aedb6dbb6ec43] <==
	E1018 09:44:41.467303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:44:41.466432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:44:41.467683       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:44:41.467893       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:44:41.467972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:44:41.468255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:44:41.468691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:44:41.468944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:44:41.469173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:44:41.469251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:44:42.308990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:44:42.315262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:44:42.346670       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:44:42.374882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:44:42.518633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:44:42.542635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:44:42.570678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:44:42.617279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:44:42.665044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:44:42.688022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:44:42.696213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:44:42.771013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:44:42.788479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:44:42.815794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1018 09:44:45.964308       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:44:45 embed-certs-055175 kubelet[1348]: I1018 09:44:45.268289    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-055175" podStartSLOduration=1.268268341 podStartE2EDuration="1.268268341s" podCreationTimestamp="2025-10-18 09:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:45.258879655 +0000 UTC m=+1.176689048" watchObservedRunningTime="2025-10-18 09:44:45.268268341 +0000 UTC m=+1.186077733"
	Oct 18 09:44:45 embed-certs-055175 kubelet[1348]: I1018 09:44:45.281393    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-055175" podStartSLOduration=2.281370278 podStartE2EDuration="2.281370278s" podCreationTimestamp="2025-10-18 09:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:45.281224032 +0000 UTC m=+1.199033427" watchObservedRunningTime="2025-10-18 09:44:45.281370278 +0000 UTC m=+1.199179670"
	Oct 18 09:44:45 embed-certs-055175 kubelet[1348]: I1018 09:44:45.281604    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-055175" podStartSLOduration=1.28156682 podStartE2EDuration="1.28156682s" podCreationTimestamp="2025-10-18 09:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:45.268506684 +0000 UTC m=+1.186316078" watchObservedRunningTime="2025-10-18 09:44:45.28156682 +0000 UTC m=+1.199376214"
	Oct 18 09:44:45 embed-certs-055175 kubelet[1348]: I1018 09:44:45.292213    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-055175" podStartSLOduration=1.292196685 podStartE2EDuration="1.292196685s" podCreationTimestamp="2025-10-18 09:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:45.292062608 +0000 UTC m=+1.209872001" watchObservedRunningTime="2025-10-18 09:44:45.292196685 +0000 UTC m=+1.210006078"
	Oct 18 09:44:48 embed-certs-055175 kubelet[1348]: I1018 09:44:48.642913    1348 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:44:48 embed-certs-055175 kubelet[1348]: I1018 09:44:48.643755    1348 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425519    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7f70a88-1903-43e5-a76f-2206c4e3df79-cni-cfg\") pod \"kindnet-tntfx\" (UID: \"f7f70a88-1903-43e5-a76f-2206c4e3df79\") " pod="kube-system/kindnet-tntfx"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425570    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qr9k\" (UniqueName: \"kubernetes.io/projected/f7f70a88-1903-43e5-a76f-2206c4e3df79-kube-api-access-5qr9k\") pod \"kindnet-tntfx\" (UID: \"f7f70a88-1903-43e5-a76f-2206c4e3df79\") " pod="kube-system/kindnet-tntfx"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425591    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c9c0f79-f699-4305-8423-c0863f443b78-kube-proxy\") pod \"kube-proxy-9n98q\" (UID: \"5c9c0f79-f699-4305-8423-c0863f443b78\") " pod="kube-system/kube-proxy-9n98q"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425606    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c9c0f79-f699-4305-8423-c0863f443b78-xtables-lock\") pod \"kube-proxy-9n98q\" (UID: \"5c9c0f79-f699-4305-8423-c0863f443b78\") " pod="kube-system/kube-proxy-9n98q"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425687    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c9c0f79-f699-4305-8423-c0863f443b78-lib-modules\") pod \"kube-proxy-9n98q\" (UID: \"5c9c0f79-f699-4305-8423-c0863f443b78\") " pod="kube-system/kube-proxy-9n98q"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425736    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4vdm\" (UniqueName: \"kubernetes.io/projected/5c9c0f79-f699-4305-8423-c0863f443b78-kube-api-access-f4vdm\") pod \"kube-proxy-9n98q\" (UID: \"5c9c0f79-f699-4305-8423-c0863f443b78\") " pod="kube-system/kube-proxy-9n98q"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425876    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7f70a88-1903-43e5-a76f-2206c4e3df79-xtables-lock\") pod \"kindnet-tntfx\" (UID: \"f7f70a88-1903-43e5-a76f-2206c4e3df79\") " pod="kube-system/kindnet-tntfx"
	Oct 18 09:44:49 embed-certs-055175 kubelet[1348]: I1018 09:44:49.425922    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7f70a88-1903-43e5-a76f-2206c4e3df79-lib-modules\") pod \"kindnet-tntfx\" (UID: \"f7f70a88-1903-43e5-a76f-2206c4e3df79\") " pod="kube-system/kindnet-tntfx"
	Oct 18 09:44:50 embed-certs-055175 kubelet[1348]: I1018 09:44:50.277867    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9n98q" podStartSLOduration=1.277838011 podStartE2EDuration="1.277838011s" podCreationTimestamp="2025-10-18 09:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:50.262842002 +0000 UTC m=+6.180651400" watchObservedRunningTime="2025-10-18 09:44:50.277838011 +0000 UTC m=+6.195647400"
	Oct 18 09:44:50 embed-certs-055175 kubelet[1348]: I1018 09:44:50.293413    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tntfx" podStartSLOduration=1.2933928749999999 podStartE2EDuration="1.293392875s" podCreationTimestamp="2025-10-18 09:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:44:50.279140754 +0000 UTC m=+6.196950147" watchObservedRunningTime="2025-10-18 09:44:50.293392875 +0000 UTC m=+6.211202277"
	Oct 18 09:45:00 embed-certs-055175 kubelet[1348]: I1018 09:45:00.756272    1348 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:45:00 embed-certs-055175 kubelet[1348]: I1018 09:45:00.904811    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1d121276-430c-41af-a2b6-542d426c43dc-tmp\") pod \"storage-provisioner\" (UID: \"1d121276-430c-41af-a2b6-542d426c43dc\") " pod="kube-system/storage-provisioner"
	Oct 18 09:45:00 embed-certs-055175 kubelet[1348]: I1018 09:45:00.904898    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88ts4\" (UniqueName: \"kubernetes.io/projected/1d121276-430c-41af-a2b6-542d426c43dc-kube-api-access-88ts4\") pod \"storage-provisioner\" (UID: \"1d121276-430c-41af-a2b6-542d426c43dc\") " pod="kube-system/storage-provisioner"
	Oct 18 09:45:00 embed-certs-055175 kubelet[1348]: I1018 09:45:00.904931    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ba2449a3-fc94-49e2-9e00-868003d349b1-config-volume\") pod \"coredns-66bc5c9577-ksdf9\" (UID: \"ba2449a3-fc94-49e2-9e00-868003d349b1\") " pod="kube-system/coredns-66bc5c9577-ksdf9"
	Oct 18 09:45:00 embed-certs-055175 kubelet[1348]: I1018 09:45:00.904955    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fmpn\" (UniqueName: \"kubernetes.io/projected/ba2449a3-fc94-49e2-9e00-868003d349b1-kube-api-access-8fmpn\") pod \"coredns-66bc5c9577-ksdf9\" (UID: \"ba2449a3-fc94-49e2-9e00-868003d349b1\") " pod="kube-system/coredns-66bc5c9577-ksdf9"
	Oct 18 09:45:01 embed-certs-055175 kubelet[1348]: I1018 09:45:01.296284    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ksdf9" podStartSLOduration=12.296256583 podStartE2EDuration="12.296256583s" podCreationTimestamp="2025-10-18 09:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:01.285314005 +0000 UTC m=+17.203123397" watchObservedRunningTime="2025-10-18 09:45:01.296256583 +0000 UTC m=+17.214065976"
	Oct 18 09:45:01 embed-certs-055175 kubelet[1348]: I1018 09:45:01.307681    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.307655322 podStartE2EDuration="11.307655322s" podCreationTimestamp="2025-10-18 09:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:01.296478952 +0000 UTC m=+17.214288344" watchObservedRunningTime="2025-10-18 09:45:01.307655322 +0000 UTC m=+17.225464716"
	Oct 18 09:45:03 embed-certs-055175 kubelet[1348]: I1018 09:45:03.421129    1348 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7skh\" (UniqueName: \"kubernetes.io/projected/cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c-kube-api-access-d7skh\") pod \"busybox\" (UID: \"cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c\") " pod="default/busybox"
	Oct 18 09:45:06 embed-certs-055175 kubelet[1348]: I1018 09:45:06.297478    1348 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.195469607 podStartE2EDuration="3.297458992s" podCreationTimestamp="2025-10-18 09:45:03 +0000 UTC" firstStartedPulling="2025-10-18 09:45:03.664417781 +0000 UTC m=+19.582227160" lastFinishedPulling="2025-10-18 09:45:05.766407152 +0000 UTC m=+21.684216545" observedRunningTime="2025-10-18 09:45:06.297187871 +0000 UTC m=+22.214997264" watchObservedRunningTime="2025-10-18 09:45:06.297458992 +0000 UTC m=+22.215268384"
	
	
	==> storage-provisioner [1e29f9931822637963274b7980f9b7ce010da5f58b6da925422e78d10b718537] <==
	I1018 09:45:01.160715       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:45:01.173898       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:45:01.173965       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:45:01.178182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:01.183712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:45:01.183963       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:45:01.184124       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f02da72-07ef-40c7-b357-1999f0a74d4d", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-055175_4054dbbb-9a73-4415-b396-03f515e5cfc0 became leader
	I1018 09:45:01.184305       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_4054dbbb-9a73-4415-b396-03f515e5cfc0!
	W1018 09:45:01.186734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:01.191214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:45:01.285270       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_4054dbbb-9a73-4415-b396-03f515e5cfc0!
	W1018 09:45:03.196316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:03.202009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:05.205888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:05.210626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:07.214374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:07.220699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:09.224220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:09.228227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:11.232065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:11.236899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:13.240669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:13.245078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-055175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.44s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (232.467387ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
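
For context on the MK_ADDON_ENABLE_PAUSED exit above: the stderr shows the paused check is a shell-out to `sudo runc list -f json`, which exits 1 on this host because /run/runc does not exist, so the check fails before any JSON is produced. A minimal Go sketch of that check follows, assuming only runc's documented JSON output fields (`id`, `status`); the function names and error handling are illustrative, not minikube's actual implementation.

	// Sketch of the paused-container check named in the error above.
	// Struct fields mirror runc's `runc list -f json` output; everything
	// else here is an assumption for illustration, not minikube's code.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of runc's list output we need.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The run above took this branch: runc exited with status 1
			// ("open /run/runc: no such file or directory").
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, fmt.Errorf("parse runc list output: %w", err)
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}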
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-708733
helpers_test.go:243: (dbg) docker inspect newest-cni-708733:

-- stdout --
	[
	    {
	        "Id": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	        "Created": "2025-10-18T09:44:58.376755553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 382697,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:44:58.42036996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hostname",
	        "HostsPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hosts",
	        "LogPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475-json.log",
	        "Name": "/newest-cni-708733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-708733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-708733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	                "LowerDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-708733",
	                "Source": "/var/lib/docker/volumes/newest-cni-708733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-708733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-708733",
	                "name.minikube.sigs.k8s.io": "newest-cni-708733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fdd524f875edebf726dff9033cfbc7a1a95262f0c60eace275e28216dca03b6f",
	            "SandboxKey": "/var/run/docker/netns/fdd524f875ed",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33211"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33212"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33215"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33213"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33214"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-708733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fe:06:19:ff:b6:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1aaffc18dfa2904bed47c15aa8ec5d5036ec16333dc17a28b2beac767bfe6ebf",
	                    "EndpointID": "e9c73a7ccccf0b3f85a961157b433e71d8776a183983e26d71440ab5db8d7a35",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-708733",
	                        "589c5abc3dda"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
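The NetworkSettings.Ports map in the inspect output above is how the cluster is reached from the host with the docker driver (for example 8443/tcp is published on 127.0.0.1:33214). A minimal Go sketch of decoding that mapping from the `docker inspect` JSON, assuming the container name from this post-mortem:

	// Sketch only: prints each container port and its host binding from the
	// docker inspect JSON shape shown above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "newest-cni-708733").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect returns a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			panic("no inspect results")
		}
		for port, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}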
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-708733 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-619885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p old-k8s-version-619885 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-589869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │                     │
	│ stop    │ -p no-preload-589869 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:44:50
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:44:50.689962  381291 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:44:50.690289  381291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:50.690297  381291 out.go:374] Setting ErrFile to fd 2...
	I1018 09:44:50.690303  381291 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:44:50.690624  381291 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:44:50.691332  381291 out.go:368] Setting JSON to false
	I1018 09:44:50.692657  381291 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5235,"bootTime":1760775456,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:44:50.692760  381291 start.go:141] virtualization: kvm guest
	I1018 09:44:50.694521  381291 out.go:179] * [newest-cni-708733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:44:50.695818  381291 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:44:50.695836  381291 notify.go:220] Checking for updates...
	I1018 09:44:50.697124  381291 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:44:50.698668  381291 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:44:50.700646  381291 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:44:50.701958  381291 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:44:50.703380  381291 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:44:50.655938  381160 start.go:305] selected driver: docker
	I1018 09:44:50.655957  381160 start.go:925] validating driver "docker" against <nil>
	I1018 09:44:50.655968  381160 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:50.656543  381160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.723181  381160 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-10-18 09:44:50.711410722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.723423  381160 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:44:50.723752  381160 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:44:50.727098  381160 out.go:179] * Using Docker driver with root privileges
	I1018 09:44:50.728279  381160 cni.go:84] Creating CNI manager for ""
	I1018 09:44:50.728370  381160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:50.728388  381160 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:44:50.728469  381160 start.go:349] cluster config:
	{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:50.730134  381160 out.go:179] * Starting "default-k8s-diff-port-942905" primary control-plane node in "default-k8s-diff-port-942905" cluster
	I1018 09:44:50.731319  381160 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:50.732546  381160 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:50.733542  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:50.733576  381160 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:50.733584  381160 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:50.733638  381160 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:50.733673  381160 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:50.733685  381160 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:50.733790  381160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:44:50.733811  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json: {Name:mk9ab3c164f844e1cc3bc862b6f6cb43b25e383b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:44:50.756198  381160 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:50.756227  381160 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:50.756243  381160 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:50.756272  381160 start.go:360] acquireMachinesLock for default-k8s-diff-port-942905: {Name:mk8b7fe5fa5304418be28440581999707ea8535f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:50.756386  381160 start.go:364] duration metric: took 90.378µs to acquireMachinesLock for "default-k8s-diff-port-942905"
	I1018 09:44:50.756417  381160 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:44:50.756498  381160 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:44:50.705612  381291 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:50.705746  381291 config.go:182] Loaded profile config "kubernetes-upgrade-919613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:44:50.705896  381291 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:44:50.731967  381291 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:44:50.732095  381291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.795538  381291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-18 09:44:50.785466804 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.795667  381291 docker.go:318] overlay module found
	I1018 09:44:50.797214  381291 out.go:179] * Using the docker driver based on user configuration
	I1018 09:44:50.798354  381291 start.go:305] selected driver: docker
	I1018 09:44:50.798368  381291 start.go:925] validating driver "docker" against <nil>
	I1018 09:44:50.798381  381291 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:44:50.799159  381291 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:44:50.860410  381291 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:59 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-18 09:44:50.848302273 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:44:50.860623  381291 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1018 09:44:50.860665  381291 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1018 09:44:50.860957  381291 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:44:50.862844  381291 out.go:179] * Using Docker driver with root privileges
	I1018 09:44:50.864893  381291 cni.go:84] Creating CNI manager for ""
	I1018 09:44:50.864958  381291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:44:50.864969  381291 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 09:44:50.865027  381291 start.go:349] cluster config:
	{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:44:50.866389  381291 out.go:179] * Starting "newest-cni-708733" primary control-plane node in "newest-cni-708733" cluster
	I1018 09:44:50.868222  381291 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:44:50.869335  381291 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:44:50.870399  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:50.870438  381291 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:44:50.870449  381291 cache.go:58] Caching tarball of preloaded images
	I1018 09:44:50.870525  381291 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:44:50.870541  381291 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:44:50.870658  381291 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:44:50.870759  381291 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:44:50.870787  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json: {Name:mk20297a5c5ed1235f19ad5750426d4c2b3e1e56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:44:50.892160  381291 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:44:50.892181  381291 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:44:50.892197  381291 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:44:50.892225  381291 start.go:360] acquireMachinesLock for newest-cni-708733: {Name:mkb1aaee475623ac79c9cbc5f8d5e2dda34020d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:44:50.892333  381291 start.go:364] duration metric: took 85.321µs to acquireMachinesLock for "newest-cni-708733"
	I1018 09:44:50.892359  381291 start.go:93] Provisioning new machine with config: &{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:44:50.892461  381291 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:44:50.411644  373771 addons.go:514] duration metric: took 521.349898ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:44:50.690794  373771 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-055175" context rescaled to 1 replicas
	W1018 09:44:52.191073  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	I1018 09:44:48.854174  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:48.854598  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:48.854651  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:48.854706  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:48.885508  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:48.885532  353123 cri.go:89] found id: ""
	I1018 09:44:48.885540  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:48.885596  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:48.889991  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:48.890059  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:48.929152  353123 cri.go:89] found id: ""
	I1018 09:44:48.929181  353123 logs.go:282] 0 containers: []
	W1018 09:44:48.929190  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:48.929195  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:48.929243  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:48.957920  353123 cri.go:89] found id: ""
	I1018 09:44:48.957947  353123 logs.go:282] 0 containers: []
	W1018 09:44:48.957959  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:48.957968  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:48.958033  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:48.989162  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:48.989183  353123 cri.go:89] found id: ""
	I1018 09:44:48.989190  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:48.989251  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:48.993357  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:48.993430  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:49.021975  353123 cri.go:89] found id: ""
	I1018 09:44:49.022002  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.022012  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:49.022020  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:49.022076  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:49.049353  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:49.049379  353123 cri.go:89] found id: "7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:49.049384  353123 cri.go:89] found id: ""
	I1018 09:44:49.049394  353123 logs.go:282] 2 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7]
	I1018 09:44:49.049455  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:49.053550  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:49.057141  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:49.057204  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:49.083741  353123 cri.go:89] found id: ""
	I1018 09:44:49.083766  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.083790  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:49.083798  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:49.083871  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:49.117190  353123 cri.go:89] found id: ""
	I1018 09:44:49.117218  353123 logs.go:282] 0 containers: []
	W1018 09:44:49.117239  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:49.117261  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:49.117279  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:49.166584  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:49.166619  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:49.227918  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:49.227941  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:49.227958  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:49.261790  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:49.261886  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:49.296838  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:49.296872  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:49.334167  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:49.334200  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:49.450870  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:49.450912  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:49.470257  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:49.470286  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:49.523546  353123 logs.go:123] Gathering logs for kube-controller-manager [7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7] ...
	I1018 09:44:49.523577  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7816d8f8e3030d2cfc6ced502f8c4eae9d9cb8e55dfe8c4da17a4f3d6efd3fe7"
	I1018 09:44:52.058905  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:52.060977  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:52.061040  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:52.061100  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:52.103424  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:52.103446  353123 cri.go:89] found id: ""
	I1018 09:44:52.103456  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:52.103527  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.108367  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:52.108434  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:52.136327  353123 cri.go:89] found id: ""
	I1018 09:44:52.136356  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.136367  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:52.136375  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:52.136437  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:52.168011  353123 cri.go:89] found id: ""
	I1018 09:44:52.168038  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.168049  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:52.168056  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:52.168122  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:52.198850  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:52.198872  353123 cri.go:89] found id: ""
	I1018 09:44:52.198881  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:52.198940  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.202937  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:52.203005  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:52.236765  353123 cri.go:89] found id: ""
	I1018 09:44:52.236795  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.236807  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:52.236816  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:52.236915  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:52.268756  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:52.268788  353123 cri.go:89] found id: ""
	I1018 09:44:52.268800  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:52.268892  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:52.273081  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:52.273159  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:52.301228  353123 cri.go:89] found id: ""
	I1018 09:44:52.301257  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.301268  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:52.301276  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:52.301342  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:52.333783  353123 cri.go:89] found id: ""
	I1018 09:44:52.333834  353123 logs.go:282] 0 containers: []
	W1018 09:44:52.333846  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:52.333858  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:52.333875  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:52.383815  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:52.383877  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:52.422634  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:52.422664  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:52.533223  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:52.533265  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:52.552549  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:52.552581  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:52.626607  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:52.626631  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:52.626647  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:52.664502  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:52.664556  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:52.719127  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:52.719168  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
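	The 09:44:52 cycle above is minikube's standard diagnostic loop once the apiserver healthz probe is refused: enumerate CRI containers per component, then tail the logs of whatever was found. Below is a minimal Go sketch of the healthz probe itself (api_server.go:253/269); the endpoint address is taken from the log, while the timeout and retry cadence are illustrative assumptions.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// The apiserver serves healthz over a self-signed cert, so a local
		// diagnostic probe skips verification.
		client := &http.Client{
			Timeout: 2 * time.Second, // assumption; minikube's timeout may differ
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		url := "https://192.168.85.2:8443/healthz" // address from the log above
	
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get(url)
			if err != nil {
				// A refused dial here corresponds to the
				// "stopped: ... connect: connection refused" lines above.
				fmt.Printf("attempt %d: %v\n", attempt, err)
				time.Sleep(3 * time.Second) // retry interval is an assumption
				continue
			}
			resp.Body.Close()
			fmt.Printf("attempt %d: healthz returned %s\n", attempt, resp.Status)
			return
		}
	}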
	I1018 09:44:50.761803  381160 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:44:50.762168  381160 start.go:159] libmachine.API.Create for "default-k8s-diff-port-942905" (driver="docker")
	I1018 09:44:50.762216  381160 client.go:168] LocalClient.Create starting
	I1018 09:44:50.762299  381160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:44:50.762346  381160 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.762373  381160 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.762459  381160 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:44:50.762491  381160 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.762517  381160 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.763036  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:44:50.785386  381160 cli_runner.go:211] docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:44:50.785469  381160 network_create.go:284] running [docker network inspect default-k8s-diff-port-942905] to gather additional debugging logs...
	I1018 09:44:50.785501  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905
	W1018 09:44:50.803414  381160 cli_runner.go:211] docker network inspect default-k8s-diff-port-942905 returned with exit code 1
	I1018 09:44:50.803439  381160 network_create.go:287] error running [docker network inspect default-k8s-diff-port-942905]: docker network inspect default-k8s-diff-port-942905: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-942905 not found
	I1018 09:44:50.803452  381160 network_create.go:289] output of [docker network inspect default-k8s-diff-port-942905]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-942905 not found
	
	** /stderr **
	I1018 09:44:50.803568  381160 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:44:50.825218  381160 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:44:50.825817  381160 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:44:50.826366  381160 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:44:50.826668  381160 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d2dbeb8dc9f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:9b:70:ff:9e:fe} reservation:<nil>}
	I1018 09:44:50.827249  381160 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:44:50.828084  381160 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d89850}
	I1018 09:44:50.828112  381160 network_create.go:124] attempt to create docker network default-k8s-diff-port-942905 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1018 09:44:50.828172  381160 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 default-k8s-diff-port-942905
	I1018 09:44:50.891628  381160 network_create.go:108] docker network default-k8s-diff-port-942905 192.168.94.0/24 created
	I1018 09:44:50.891656  381160 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-942905" container
	I1018 09:44:50.891716  381160 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:44:50.911268  381160 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-942905 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:44:50.932632  381160 oci.go:103] Successfully created a docker volume default-k8s-diff-port-942905
	I1018 09:44:50.932772  381160 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-942905-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --entrypoint /usr/bin/test -v default-k8s-diff-port-942905:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:44:51.344903  381160 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-942905
	I1018 09:44:51.344955  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:51.344981  381160 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:44:51.345068  381160 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-942905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
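	The subnet scan above (network.go:211/206) walks candidate private /24s, skipping any that already back a Docker bridge, and takes the first free one. In this run the candidates step the third octet by 9 (49, 58, 67, 76, 85, 94, ...). A minimal Go sketch of that selection, with the taken set hard-coded from the log; real code would enumerate Docker networks instead.
	
	package main
	
	import "fmt"
	
	func main() {
		// Subnets already backing a Docker bridge, copied from the
		// "skipping subnet ... that is taken" lines above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		// Candidates step the third octet by 9, as observed in the log.
		for octet := 49; octet <= 246; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				fmt.Println("using free private subnet", cidr) // 192.168.94.0/24
				return
			}
			fmt.Println("skipping subnet", cidr, "that is taken")
		}
	}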
	I1018 09:44:50.894093  381291 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1018 09:44:50.894331  381291 start.go:159] libmachine.API.Create for "newest-cni-708733" (driver="docker")
	I1018 09:44:50.894364  381291 client.go:168] LocalClient.Create starting
	I1018 09:44:50.894422  381291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem
	I1018 09:44:50.894460  381291 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.894476  381291 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.894553  381291 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem
	I1018 09:44:50.894584  381291 main.go:141] libmachine: Decoding PEM data...
	I1018 09:44:50.894602  381291 main.go:141] libmachine: Parsing certificate...
	I1018 09:44:50.895030  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1018 09:44:50.914868  381291 cli_runner.go:211] docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1018 09:44:50.914941  381291 network_create.go:284] running [docker network inspect newest-cni-708733] to gather additional debugging logs...
	I1018 09:44:50.914967  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733
	W1018 09:44:50.933906  381291 cli_runner.go:211] docker network inspect newest-cni-708733 returned with exit code 1
	I1018 09:44:50.933948  381291 network_create.go:287] error running [docker network inspect newest-cni-708733]: docker network inspect newest-cni-708733: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-708733 not found
	I1018 09:44:50.933963  381291 network_create.go:289] output of [docker network inspect newest-cni-708733]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-708733 not found
	
	** /stderr **
	I1018 09:44:50.934151  381291 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:44:50.952353  381291 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
	I1018 09:44:50.953026  381291 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-159189dc4cae IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:66:7d:df:93:8a:95} reservation:<nil>}
	I1018 09:44:50.953604  381291 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-34d26817ecdb IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:18:1e:90:91:d0} reservation:<nil>}
	I1018 09:44:50.953950  381291 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7d2dbeb8dc9f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:62:9b:70:ff:9e:fe} reservation:<nil>}
	I1018 09:44:50.954528  381291 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-de47eb429c53 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ea:6f:ec:e2:71:9d} reservation:<nil>}
	I1018 09:44:50.955055  381291 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-0fd78e2b1cc4 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:4a:53:cb:95:ba:9d} reservation:<nil>}
	I1018 09:44:50.955759  381291 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e76aa0}
	I1018 09:44:50.955786  381291 network_create.go:124] attempt to create docker network newest-cni-708733 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1018 09:44:50.955871  381291 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-708733 newest-cni-708733
	I1018 09:44:51.020118  381291 network_create.go:108] docker network newest-cni-708733 192.168.103.0/24 created
	I1018 09:44:51.020149  381291 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-708733" container
	I1018 09:44:51.020201  381291 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1018 09:44:51.038937  381291 cli_runner.go:164] Run: docker volume create newest-cni-708733 --label name.minikube.sigs.k8s.io=newest-cni-708733 --label created_by.minikube.sigs.k8s.io=true
	I1018 09:44:51.059729  381291 oci.go:103] Successfully created a docker volume newest-cni-708733
	I1018 09:44:51.059811  381291 cli_runner.go:164] Run: docker run --rm --name newest-cni-708733-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-708733 --entrypoint /usr/bin/test -v newest-cni-708733:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1018 09:44:51.480607  381291 oci.go:107] Successfully prepared a docker volume newest-cni-708733
	I1018 09:44:51.480663  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:44:51.480688  381291 kic.go:194] Starting extracting preloaded images to volume ...
	I1018 09:44:51.480777  381291 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-708733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
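	Both clusters extract their image preload the same way (kic.go:194): run the kicbase image with /usr/bin/tar as the entrypoint, the lz4 preload tarball mounted read-only, and the cluster's named volume as the extraction target, so the host needs no lz4 tooling of its own. A sketch reproducing that step via os/exec; the paths and names are copied from the log, with the image digest omitted for brevity.
	
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	func main() {
		tarball := "/home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
		volume := "newest-cni-708733"
		image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
	
		// tar runs inside the kicbase image; the named volume receives the
		// extracted image store that the node container later mounts at /var.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("extract failed: %v\n%s", err, out)
		}
		log.Println("preload extracted into volume", volume)
	}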
	W1018 09:44:54.739779  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	W1018 09:44:56.744791  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
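	The node_ready.go warnings above are a poll-until-Ready loop on the node's Ready condition. A hedged sketch of an equivalent wait, shelling out to kubectl rather than using minikube's internals; the jsonpath query, attempt cap, and interval are assumptions.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	func main() {
		// jsonpath pulls just the Ready condition's status ("True"/"False").
		jsonpath := `{.status.conditions[?(@.type=="Ready")].status}`
		for attempt := 0; attempt < 30; attempt++ { // cap is an assumption
			out, err := exec.Command("kubectl", "get", "node", "embed-certs-055175",
				"-o", "jsonpath="+jsonpath).Output()
			if err == nil && strings.TrimSpace(string(out)) == "True" {
				fmt.Println("node is Ready")
				return
			}
			fmt.Println(`node has "Ready":"False" status (will retry)`)
			time.Sleep(2 * time.Second)
		}
	}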
	I1018 09:44:55.251736  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:55.252176  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:55.252232  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:55.252291  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:55.279770  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:55.279808  353123 cri.go:89] found id: ""
	I1018 09:44:55.279831  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:55.279888  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.283764  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:55.283877  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:55.310174  353123 cri.go:89] found id: ""
	I1018 09:44:55.310200  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.310212  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:55.310220  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:55.310283  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:55.336491  353123 cri.go:89] found id: ""
	I1018 09:44:55.336516  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.336524  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:55.336530  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:55.336594  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:55.362990  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:55.363016  353123 cri.go:89] found id: ""
	I1018 09:44:55.363026  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:55.363093  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.367531  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:55.367608  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:55.393317  353123 cri.go:89] found id: ""
	I1018 09:44:55.393339  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.393347  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:55.393353  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:55.393400  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:55.420073  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:55.420093  353123 cri.go:89] found id: ""
	I1018 09:44:55.420101  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:55.420158  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:55.424059  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:55.424114  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:55.451671  353123 cri.go:89] found id: ""
	I1018 09:44:55.451695  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.451702  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:55.451709  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:55.451755  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:55.478444  353123 cri.go:89] found id: ""
	I1018 09:44:55.478469  353123 logs.go:282] 0 containers: []
	W1018 09:44:55.478477  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:55.478486  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:55.478500  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:55.505264  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:55.505291  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:55.551185  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:55.551218  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:55.581868  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:55.581894  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:55.671081  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:55.671117  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:55.690572  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:55.690612  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:55.750418  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:55.750437  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:55.750450  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:55.781300  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:55.781331  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:58.332568  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:44:58.333057  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:44:58.333116  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:44:58.333175  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:44:58.367383  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:44:58.367411  353123 cri.go:89] found id: ""
	I1018 09:44:58.367421  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:44:58.367477  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.372128  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:44:58.372310  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:44:58.401796  353123 cri.go:89] found id: ""
	I1018 09:44:58.401853  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.401866  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:44:58.401875  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:44:58.401941  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:44:58.433947  353123 cri.go:89] found id: ""
	I1018 09:44:58.433980  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.433992  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:44:58.434000  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:44:58.434066  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:44:58.464332  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:44:58.464358  353123 cri.go:89] found id: ""
	I1018 09:44:58.464369  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:44:58.464434  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.468752  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:44:58.468855  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:44:58.501219  353123 cri.go:89] found id: ""
	I1018 09:44:58.501270  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.501281  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:44:58.501289  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:44:58.501360  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:44:58.540335  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:58.540359  353123 cri.go:89] found id: ""
	I1018 09:44:58.540369  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:44:58.540426  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:44:58.545307  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:44:58.545381  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:44:58.573432  353123 cri.go:89] found id: ""
	I1018 09:44:58.573462  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.573471  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:44:58.573477  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:44:58.573522  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:44:58.604321  353123 cri.go:89] found id: ""
	I1018 09:44:58.604353  353123 logs.go:282] 0 containers: []
	W1018 09:44:58.604365  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:44:58.604379  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:44:58.604397  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
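	Each diagnostic pass enumerates one component at a time with crictl (cri.go:54/89): filter by name across all states, print container IDs only, and an empty result yields the `No container was found matching` warnings seen throughout this section. A sketch of that enumeration, using the same crictl invocation as the log.
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "storage-provisioner"}
		for _, name := range components {
			// Same flags as the log: IDs only (--quiet), all states (-a),
			// filtered by container name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a",
				"--quiet", "--name="+name).Output()
			if err != nil {
				fmt.Printf("%s: crictl failed: %v\n", name, err)
				continue
			}
			ids := strings.Fields(string(out))
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", name)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		}
	}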
	I1018 09:44:58.291368  381160 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-942905:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.946241785s)
	I1018 09:44:58.291407  381160 kic.go:203] duration metric: took 6.946420512s to extract preloaded images to volume ...
	W1018 09:44:58.291494  381160 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:44:58.291543  381160 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:44:58.291587  381160 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:44:58.358186  381160 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-942905 --name default-k8s-diff-port-942905 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-942905 --network default-k8s-diff-port-942905 --ip 192.168.94.2 --volume default-k8s-diff-port-942905:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:44:58.668690  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Running}}
	I1018 09:44:58.693054  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:58.712905  381160 cli_runner.go:164] Run: docker exec default-k8s-diff-port-942905 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:44:58.759488  381160 oci.go:144] the created container "default-k8s-diff-port-942905" has a running status.
	I1018 09:44:58.759536  381160 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa...
	I1018 09:44:59.120033  381160 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:44:59.153002  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:59.179797  381160 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:44:59.179835  381160 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-942905 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:44:59.227960  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:44:59.251706  381160 machine.go:93] provisionDockerMachine start ...
	I1018 09:44:59.251812  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.274634  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.275009  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.275029  381160 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:44:59.417050  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:44:59.417084  381160 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-942905"
	I1018 09:44:59.417150  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.438561  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.438955  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.438980  381160 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942905 && echo "default-k8s-diff-port-942905" | sudo tee /etc/hostname
	I1018 09:44:59.590383  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:44:59.590489  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:44:59.608734  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.609014  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:44:59.609045  381160 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:44:59.744586  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
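	Provisioning above runs over minikube's "native" SSH client against the container's published 22/tcp mapping (127.0.0.1:33206 in this run). A minimal sketch of the same round trip using golang.org/x/crypto/ssh (requires that module); the key path is the CI host's path from the log, and skipping host-key verification mirrors connecting to a freshly created local container.
	
	package main
	
	import (
		"fmt"
		"log"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path as logged by kic.go:225 above (CI host specific).
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// A freshly created local container has no known host key.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		// 127.0.0.1:33206 is the published 22/tcp mapping from the log.
		client, err := ssh.Dial("tcp", "127.0.0.1:33206", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer session.Close()
		out, err := session.Output("hostname") // same first command as the log
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out)
	}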
	I1018 09:44:59.744640  381160 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:44:59.744670  381160 ubuntu.go:190] setting up certificates
	I1018 09:44:59.744685  381160 provision.go:84] configureAuth start
	I1018 09:44:59.744747  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:44:59.762856  381160 provision.go:143] copyHostCerts
	I1018 09:44:59.762936  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:44:59.762949  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:44:59.763041  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:44:59.763192  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:44:59.763209  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:44:59.763254  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:44:59.763365  381160 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:44:59.763380  381160 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:44:59.763421  381160 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:44:59.763522  381160 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942905 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-942905 localhost minikube]
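	The server certificate generated at provision.go:117 carries the IPs and hostnames from the log as SANs. A sketch with crypto/x509; it self-signs for brevity where minikube signs with its CA key, and the key size and validity window are illustrative assumptions.
	
	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-942905"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour), // assumption
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as logged: san=[127.0.0.1 192.168.94.2 ... localhost minikube].
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
			DNSNames:    []string{"default-k8s-diff-port-942905", "localhost", "minikube"},
		}
		// Self-signed here for brevity; minikube signs with ca.pem/ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}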
	I1018 09:45:00.359137  381160 provision.go:177] copyRemoteCerts
	I1018 09:45:00.359208  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:00.359255  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.376601  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:00.471629  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:00.490954  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:45:00.508779  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:00.527720  381160 provision.go:87] duration metric: took 783.019645ms to configureAuth
	I1018 09:45:00.527744  381160 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:00.527928  381160 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:00.528036  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.545927  381160 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:00.546200  381160 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33206 <nil> <nil>}
	I1018 09:45:00.546218  381160 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:44:58.291901  381291 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-708733:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.811079626s)
	I1018 09:44:58.291950  381291 kic.go:203] duration metric: took 6.811257788s to extract preloaded images to volume ...
	W1018 09:44:58.292045  381291 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:44:58.292087  381291 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:44:58.292133  381291 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:44:58.358184  381291 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-708733 --name newest-cni-708733 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-708733 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-708733 --network newest-cni-708733 --ip 192.168.103.2 --volume newest-cni-708733:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:44:58.789904  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Running}}
	I1018 09:44:58.810672  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:58.840588  381291 cli_runner.go:164] Run: docker exec newest-cni-708733 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:44:58.892619  381291 oci.go:144] the created container "newest-cni-708733" has a running status.
	I1018 09:44:58.892654  381291 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa...
	I1018 09:44:59.437020  381291 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:44:59.464248  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:59.484885  381291 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:44:59.484909  381291 kic_runner.go:114] Args: [docker exec --privileged newest-cni-708733 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:44:59.531443  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:44:59.551011  381291 machine.go:93] provisionDockerMachine start ...
	I1018 09:44:59.551106  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.567782  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.568081  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.568096  381291 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:44:59.701673  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:44:59.701700  381291 ubuntu.go:182] provisioning hostname "newest-cni-708733"
	I1018 09:44:59.701758  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.719388  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.719681  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.719704  381291 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708733 && echo "newest-cni-708733" | sudo tee /etc/hostname
	I1018 09:44:59.870706  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:44:59.870801  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:44:59.890531  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:44:59.890745  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:44:59.890763  381291 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:00.024744  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:00.024774  381291 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:00.024807  381291 ubuntu.go:190] setting up certificates
	I1018 09:45:00.024842  381291 provision.go:84] configureAuth start
	I1018 09:45:00.024902  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:00.043035  381291 provision.go:143] copyHostCerts
	I1018 09:45:00.043103  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:00.043116  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:00.043168  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:00.043275  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:00.043285  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:00.043306  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:00.043371  381291 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:00.043378  381291 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:00.043396  381291 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:00.043444  381291 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708733 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733]
	I1018 09:45:00.327989  381291 provision.go:177] copyRemoteCerts
	I1018 09:45:00.328049  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:00.328084  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.347868  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:00.444921  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:00.464010  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:45:00.482098  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:00.499378  381291 provision.go:87] duration metric: took 474.517909ms to configureAuth
	I1018 09:45:00.499406  381291 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:00.499605  381291 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:00.499725  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.519511  381291 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:00.519721  381291 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33211 <nil> <nil>}
	I1018 09:45:00.519737  381291 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:00.771966  381291 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:00.771998  381291 machine.go:96] duration metric: took 1.220960491s to provisionDockerMachine
	I1018 09:45:00.772012  381291 client.go:171] duration metric: took 9.877637415s to LocalClient.Create
	I1018 09:45:00.772034  381291 start.go:167] duration metric: took 9.87770527s to libmachine.API.Create "newest-cni-708733"
	I1018 09:45:00.772051  381291 start.go:293] postStartSetup for "newest-cni-708733" (driver="docker")
	I1018 09:45:00.772064  381291 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:00.772130  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:00.772181  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.795971  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:00.898970  381291 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:00.902632  381291 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:00.902666  381291 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:00.902677  381291 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:00.902723  381291 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:00.902835  381291 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:00.902964  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:00.910995  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:00.933080  381291 start.go:296] duration metric: took 161.017858ms for postStartSetup
	I1018 09:45:00.933429  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:00.952417  381291 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:00.953438  381291 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:00.953481  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:00.972137  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.066959  381291 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:01.071412  381291 start.go:128] duration metric: took 10.178935412s to createHost
	I1018 09:45:01.071434  381291 start.go:83] releasing machines lock for "newest-cni-708733", held for 10.179088829s
	I1018 09:45:01.071491  381291 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:01.088634  381291 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:01.088695  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:01.088695  381291 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:01.088786  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:01.112801  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.113918  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:01.276879  381291 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:01.283898  381291 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:01.328440  381291 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:01.333133  381291 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:01.333205  381291 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:01.361175  381291 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:45:01.361202  381291 start.go:495] detecting cgroup driver to use...
	I1018 09:45:01.361231  381291 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:01.361272  381291 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:01.378234  381291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:01.391647  381291 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:01.391707  381291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:01.409914  381291 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:01.429388  381291 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:01.526350  381291 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:01.630562  381291 docker.go:234] disabling docker service ...
	I1018 09:45:01.630633  381291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:01.653759  381291 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:01.667643  381291 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:01.778059  381291 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:01.887041  381291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:01.901660  381291 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:01.918988  381291 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:01.919052  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.932977  381291 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:01.933047  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.943533  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.953542  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.963614  381291 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:01.972556  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.982181  381291 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.996679  381291 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
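[editor's note] Each sed invocation above rewrites one key of /etc/crio/crio.conf.d/02-crio.conf in place. A hedged sketch of the two main edits done in Go instead of sed (the file path, the pause image, and the cgroup manager value are from the log; the regexes mirror the sed patterns shown):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		log.Fatal(err)
	}
}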
	I1018 09:45:02.007480  381291 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:02.015934  381291 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:02.024007  381291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.123050  381291 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:02.246923  381291 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:02.246995  381291 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:02.251619  381291 start.go:563] Will wait 60s for crictl version
	I1018 09:45:02.251683  381291 ssh_runner.go:195] Run: which crictl
	I1018 09:45:02.256150  381291 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:02.283457  381291 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:02.283534  381291 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.316271  381291 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.351268  381291 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
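[editor's note] The "Will wait 60s for socket path /var/run/crio/crio.sock" step above is a plain stat-poll against the CRI socket after the restart. A minimal sketch of that wait (socket path and 60s budget from the log; the poll interval is an assumption):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is up")
}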
	I1018 09:45:02.352768  381291 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:02.370940  381291 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:02.376017  381291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
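[editor's note] The bash one-liner above is an idempotent hosts-file update: drop any stale host.minikube.internal line, append the current one, and copy the result back over /etc/hosts. The same logic in Go (hostname and tab-separated format from the log; this sketch renames within /etc rather than staging in /tmp, to stay on one filesystem):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.103.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Equivalent of: grep -v $'\thost.minikube.internal$'
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/etc/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.Rename(tmp, "/etc/hosts"); err != nil {
		log.Fatal(err)
	}
}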
	I1018 09:45:00.811078  381160 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:00.811113  381160 machine.go:96] duration metric: took 1.559383872s to provisionDockerMachine
	I1018 09:45:00.811126  381160 client.go:171] duration metric: took 10.048900106s to LocalClient.Create
	I1018 09:45:00.811151  381160 start.go:167] duration metric: took 10.048987547s to libmachine.API.Create "default-k8s-diff-port-942905"
	I1018 09:45:00.811164  381160 start.go:293] postStartSetup for "default-k8s-diff-port-942905" (driver="docker")
	I1018 09:45:00.811178  381160 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:00.811254  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:00.811299  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.830550  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:00.928438  381160 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:00.931979  381160 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:00.932011  381160 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:00.932023  381160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:00.932073  381160 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:00.932183  381160 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:00.932322  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:00.940162  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:00.960740  381160 start.go:296] duration metric: took 149.561722ms for postStartSetup
	I1018 09:45:00.961086  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:45:00.979805  381160 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:45:00.980166  381160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:00.980207  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:00.997884  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.093448  381160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:01.104589  381160 start.go:128] duration metric: took 10.348071861s to createHost
	I1018 09:45:01.104621  381160 start.go:83] releasing machines lock for "default-k8s-diff-port-942905", held for 10.348219433s
	I1018 09:45:01.104710  381160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:45:01.127607  381160 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:01.127676  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:01.127704  381160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:01.127778  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:01.150611  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.154699  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:01.253321  381160 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:01.326956  381160 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:01.363983  381160 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:01.368694  381160 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:01.368747  381160 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:01.397138  381160 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1018 09:45:01.397161  381160 start.go:495] detecting cgroup driver to use...
	I1018 09:45:01.397192  381160 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:01.397237  381160 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:01.413222  381160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:01.426074  381160 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:01.426124  381160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:01.444782  381160 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:01.468099  381160 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:01.562373  381160 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:01.672401  381160 docker.go:234] disabling docker service ...
	I1018 09:45:01.672469  381160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:01.694710  381160 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:01.714252  381160 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:01.829193  381160 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:01.931303  381160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:01.946887  381160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:01.962333  381160 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:01.962397  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.973621  381160 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:01.973690  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.983444  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:01.992651  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.003641  381160 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:02.013656  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.023401  381160 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.039670  381160 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:02.049256  381160 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:02.064093  381160 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:02.073727  381160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.172619  381160 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:02.289299  381160 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:02.289388  381160 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:02.293813  381160 start.go:563] Will wait 60s for crictl version
	I1018 09:45:02.293900  381160 ssh_runner.go:195] Run: which crictl
	I1018 09:45:02.297617  381160 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:02.327297  381160 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:02.327375  381160 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.357910  381160 ssh_runner.go:195] Run: crio --version
	I1018 09:45:02.390061  381291 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:45:02.390874  381160 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 09:44:59.191088  373771 node_ready.go:57] node "embed-certs-055175" has "Ready":"False" status (will retry)
	I1018 09:45:01.191208  373771 node_ready.go:49] node "embed-certs-055175" is "Ready"
	I1018 09:45:01.191253  373771 node_ready.go:38] duration metric: took 11.00402594s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:01.191272  373771 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:01.191356  373771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:01.205763  373771 api_server.go:72] duration metric: took 11.31550879s to wait for apiserver process to appear ...
	I1018 09:45:01.205805  373771 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:01.205851  373771 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:01.210749  373771 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:45:01.211633  373771 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:01.211659  373771 api_server.go:131] duration metric: took 5.845331ms to wait for apiserver health ...
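[editor's note] The healthz probe above is an HTTPS GET that succeeds once the apiserver answers 200 with body "ok". A sketch of one probe (endpoint from the log; skipping certificate verification is an assumption made here because the probe talks to the apiserver by IP with a cluster-local CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikube's own CA, so verification
		// is skipped for this standalone sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}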
	I1018 09:45:01.211670  373771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:01.215349  373771 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:01.215380  373771 system_pods.go:61] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:01.215386  373771 system_pods.go:61] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.215393  373771 system_pods.go:61] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.215397  373771 system_pods.go:61] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.215405  373771 system_pods.go:61] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.215408  373771 system_pods.go:61] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.215411  373771 system_pods.go:61] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.215416  373771 system_pods.go:61] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:01.215426  373771 system_pods.go:74] duration metric: took 3.750342ms to wait for pod list to return data ...
	I1018 09:45:01.215436  373771 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:01.217968  373771 default_sa.go:45] found service account: "default"
	I1018 09:45:01.217991  373771 default_sa.go:55] duration metric: took 2.548354ms for default service account to be created ...
	I1018 09:45:01.218001  373771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:45:01.220282  373771 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:01.220312  373771 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:01.220319  373771 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.220327  373771 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.220333  373771 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.220340  373771 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.220345  373771 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.220351  373771 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.220358  373771 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:01.220400  373771 retry.go:31] will retry after 292.027072ms: missing components: kube-dns
	I1018 09:45:01.517165  373771 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:01.517195  373771 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running
	I1018 09:45:01.517200  373771 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running
	I1018 09:45:01.517203  373771 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running
	I1018 09:45:01.517208  373771 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running
	I1018 09:45:01.517212  373771 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running
	I1018 09:45:01.517215  373771 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running
	I1018 09:45:01.517218  373771 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running
	I1018 09:45:01.517221  373771 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running
	I1018 09:45:01.517228  373771 system_pods.go:126] duration metric: took 299.220385ms to wait for k8s-apps to be running ...
	I1018 09:45:01.517235  373771 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:45:01.517278  373771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:01.530674  373771 system_svc.go:56] duration metric: took 13.426605ms WaitForService to wait for kubelet
	I1018 09:45:01.530709  373771 kubeadm.go:586] duration metric: took 11.640461228s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:45:01.530731  373771 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:01.534308  373771 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:01.534331  373771 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:01.534354  373771 node_conditions.go:105] duration metric: took 3.608017ms to run NodePressure ...
	I1018 09:45:01.534369  373771 start.go:241] waiting for startup goroutines ...
	I1018 09:45:01.534378  373771 start.go:246] waiting for cluster config update ...
	I1018 09:45:01.534387  373771 start.go:255] writing updated cluster config ...
	I1018 09:45:01.534640  373771 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:01.538546  373771 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:01.542230  373771 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.547256  373771 pod_ready.go:94] pod "coredns-66bc5c9577-ksdf9" is "Ready"
	I1018 09:45:01.547284  373771 pod_ready.go:86] duration metric: took 5.031552ms for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.549343  373771 pod_ready.go:83] waiting for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.553589  373771 pod_ready.go:94] pod "etcd-embed-certs-055175" is "Ready"
	I1018 09:45:01.553619  373771 pod_ready.go:86] duration metric: took 4.251109ms for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.555860  373771 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.560049  373771 pod_ready.go:94] pod "kube-apiserver-embed-certs-055175" is "Ready"
	I1018 09:45:01.560072  373771 pod_ready.go:86] duration metric: took 4.189026ms for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.562507  373771 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:01.943358  373771 pod_ready.go:94] pod "kube-controller-manager-embed-certs-055175" is "Ready"
	I1018 09:45:01.943391  373771 pod_ready.go:86] duration metric: took 380.861522ms for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.143523  373771 pod_ready.go:83] waiting for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.542665  373771 pod_ready.go:94] pod "kube-proxy-9n98q" is "Ready"
	I1018 09:45:02.542690  373771 pod_ready.go:86] duration metric: took 399.136576ms for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:02.743469  373771 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:03.143247  373771 pod_ready.go:94] pod "kube-scheduler-embed-certs-055175" is "Ready"
	I1018 09:45:03.143279  373771 pod_ready.go:86] duration metric: took 399.784483ms for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:45:03.143292  373771 pod_ready.go:40] duration metric: took 1.604710305s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:03.189033  373771 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:45:03.191585  373771 out.go:179] * Done! kubectl is now configured to use "embed-certs-055175" cluster and "default" namespace by default
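[editor's note] The pod_ready waits above boil down to listing kube-system pods and checking each pod's PodReady condition. A hedged client-go sketch of that check (namespace from the log; the kubeconfig path is an assumption for a standalone program):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%-55s ready=%v\n", p.Name, ready)
	}
}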
	I1018 09:44:58.655042  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:44:58.655074  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:44:58.688346  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:44:58.688380  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:44:58.750658  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:44:58.750699  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:44:58.785634  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:44:58.785664  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:44:58.933097  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:44:58.933133  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:44:58.959738  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:44:58.959770  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:44:59.060360  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:44:59.060387  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:44:59.060404  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:01.607920  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:01.608518  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:01.608589  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:01.608650  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:01.644374  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:01.644398  353123 cri.go:89] found id: ""
	I1018 09:45:01.644410  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:01.644472  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.649392  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:01.649465  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:01.678954  353123 cri.go:89] found id: ""
	I1018 09:45:01.678983  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.678994  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:01.679005  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:01.679068  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:01.715079  353123 cri.go:89] found id: ""
	I1018 09:45:01.715110  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.715121  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:01.715129  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:01.715191  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:01.743578  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:01.743613  353123 cri.go:89] found id: ""
	I1018 09:45:01.743624  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:01.743685  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.749121  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:01.749204  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:01.781635  353123 cri.go:89] found id: ""
	I1018 09:45:01.781663  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.781673  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:01.781681  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:01.781748  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:01.811864  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:01.811891  353123 cri.go:89] found id: ""
	I1018 09:45:01.811903  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:01.811969  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:01.819899  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:01.820044  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:01.851980  353123 cri.go:89] found id: ""
	I1018 09:45:01.852008  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.852023  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:01.852031  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:01.852100  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:01.889801  353123 cri.go:89] found id: ""
	I1018 09:45:01.889843  353123 logs.go:282] 0 containers: []
	W1018 09:45:01.889857  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:01.889868  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:01.889883  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:01.920012  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:01.920038  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:01.973711  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:01.973741  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:02.007947  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:02.007976  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:02.112317  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:02.112352  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:02.132749  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:02.132780  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:02.204788  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:02.204809  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:02.204841  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:02.243306  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:02.243346  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:02.392601  381160 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:02.411071  381160 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:02.415686  381160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:02.428058  381160 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:02.428202  381160 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:02.428261  381160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.462671  381160 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.462692  381160 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:02.462737  381160 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.491340  381160 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.491365  381160 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:02.491373  381160 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:45:02.491453  381160 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-942905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:02.491513  381160 ssh_runner.go:195] Run: crio config
	I1018 09:45:02.539294  381160 cni.go:84] Creating CNI manager for ""
	I1018 09:45:02.539317  381160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:02.539340  381160 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:45:02.539361  381160 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942905 NodeName:default-k8s-diff-port-942905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:02.539485  381160 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:02.539544  381160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:02.548244  381160 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:02.548311  381160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:02.556665  381160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:45:02.569912  381160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:02.585539  381160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
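[editor's note] The kubeadm config written above is four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits and sanity-checks the written file with gopkg.in/yaml.v3 (the path is from the log; the check itself is illustrative):

package main

import (
	"bytes"
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		log.Fatal(err)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF after the last document
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}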
	I1018 09:45:02.599103  381160 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:02.603152  381160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:02.616312  381160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.702091  381160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:02.733016  381160 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905 for IP: 192.168.94.2
	I1018 09:45:02.733040  381160 certs.go:195] generating shared ca certs ...
	I1018 09:45:02.733060  381160 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:02.733237  381160 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:02.733279  381160 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:02.733289  381160 certs.go:257] generating profile certs ...
	I1018 09:45:02.733342  381160 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key
	I1018 09:45:02.733362  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt with IP's: []
	I1018 09:45:03.027373  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt ...
	I1018 09:45:03.027397  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.crt: {Name:mk981af9917b6ac92974b225166ec0395d71372f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.027562  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key ...
	I1018 09:45:03.027582  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key: {Name:mkd2ccf0788c296cb00266f87e9a3f936c6bb097 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.027707  381160 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca
	I1018 09:45:03.027732  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1018 09:45:03.455977  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca ...
	I1018 09:45:03.456007  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca: {Name:mk2889a394c4a49479ba0dac8a102927df330339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.456154  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca ...
	I1018 09:45:03.456166  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca: {Name:mk5a06293ffa6e89403afb34f76f87cc2a90226d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.456241  381160 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt.cb5a57ca -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt
	I1018 09:45:03.456326  381160 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key
	I1018 09:45:03.456393  381160 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key
	I1018 09:45:03.456410  381160 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt with IP's: []
	I1018 09:45:03.745412  381160 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt ...
	I1018 09:45:03.745442  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt: {Name:mk3e7ea9bc969efb2a6fa264abfdc7649bac7488 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.745615  381160 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key ...
	I1018 09:45:03.745629  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key: {Name:mk9070cc1f6e0ec8f11fe644828ed9f3eab55e76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
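[editor's note] Each "generating signed profile cert" step above is ordinary x509 issuance: a fresh key, a template carrying the SANs listed in the log, signed by the shared minikube CA. A condensed, self-contained crypto/x509 sketch under those assumptions (the SAN IPs and the 26280h lifetime come from the log; generating the CA inline stands in for loading .minikube/ca.{crt,key}):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Shared CA, standing in for the pre-existing minikubeCA files.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the log
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}
	// Profile cert with the SANs the log lists for the apiserver.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}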
	I1018 09:45:03.745795  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:03.745853  381160 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:03.745864  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:03.745885  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:03.745906  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:03.745927  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:03.745965  381160 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:03.746557  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:03.766750  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:03.785188  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:03.803488  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:03.822158  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:45:03.841444  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:03.861945  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:03.880231  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:03.899112  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:03.919492  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:03.938037  381160 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:03.956161  381160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:03.969096  381160 ssh_runner.go:195] Run: openssl version
	I1018 09:45:03.976951  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:03.985944  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.990631  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.990698  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:04.030208  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:04.039727  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:04.049165  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.053153  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.053225  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:04.094573  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:04.105540  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:04.115460  381160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.119424  381160 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.119486  381160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:04.154663  381160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
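The openssl x509 -hash -noout calls above print each certificate's subject hash, and the ln -fs commands create the <hash>.0 symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that OpenSSL uses to look up trust anchors in /etc/ssl/certs. A sketch of the same pattern in Go, shelling out to openssl as the log does (run as root; error handling trimmed):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // Subject hash, e.g. "b5213941", printed by openssl.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // ln -fs semantics: replace any stale link
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", cert)
    }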
	I1018 09:45:04.163890  381160 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:04.167557  381160 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:45:04.167618  381160 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:04.167716  381160 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:04.167770  381160 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:04.196236  381160 cri.go:89] found id: ""
	I1018 09:45:04.196321  381160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:04.205172  381160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:45:04.213562  381160 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:45:04.213646  381160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:45:04.221974  381160 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:45:04.221990  381160 kubeadm.go:157] found existing configuration files:
	
	I1018 09:45:04.222039  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1018 09:45:04.229745  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:45:04.229812  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:45:04.238298  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1018 09:45:04.246658  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:45:04.246716  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:45:04.255972  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1018 09:45:04.264031  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:45:04.264087  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:45:04.272422  381160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1018 09:45:04.280843  381160 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:45:04.280903  381160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
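Each grep/rm pair above is the stale-config sweep: any leftover kubeconfig that does not reference the expected control-plane endpoint (https://control-plane.minikube.internal:8444 for this profile) is deleted so kubeadm regenerates it. Here every file is simply absent, which is why each grep exits with status 2. A sketch of the check, with the endpoint and file list taken from the log:

    package main

    import (
        "bytes"
        "os"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8444")
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil {
                continue // file missing: nothing stale to clean, as in this run
            }
            if !bytes.Contains(data, endpoint) {
                _ = os.Remove(conf) // wrong endpoint: drop it so kubeadm rewrites it
            }
        }
    }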
	I1018 09:45:04.288575  381160 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:45:04.360700  381160 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:45:04.428420  381160 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:45:02.391762  381291 kubeadm.go:883] updating cluster {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:02.391993  381291 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:02.392084  381291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.425187  381291 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.425210  381291 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:02.425255  381291 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:02.454521  381291 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:02.454553  381291 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:02.454563  381291 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:02.454690  381291 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-708733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
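The [Service] fragment above is a systemd drop-in: the empty ExecStart= line clears the packaged unit's command so the next line can re-declare the kubelet with node-specific flags. A sketch of emitting that override (illustrative values from the newest-cni profile; per the scp line further down, minikube writes the result to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf):

    package main

    import "fmt"

    func main() {
        kubelet := "/var/lib/minikube/binaries/v1.34.1/kubelet"
        // First ExecStart= resets the unit's command; the second sets the real one.
        fmt.Printf("[Service]\nExecStart=\nExecStart=%s --hostname-override=%s --node-ip=%s --kubeconfig=/etc/kubernetes/kubelet.conf\n",
            kubelet, "newest-cni-708733", "192.168.103.2")
    }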
	I1018 09:45:02.454778  381291 ssh_runner.go:195] Run: crio config
	I1018 09:45:02.503782  381291 cni.go:84] Creating CNI manager for ""
	I1018 09:45:02.503810  381291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:02.503856  381291 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:45:02.503896  381291 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-708733 NodeName:newest-cni-708733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:02.504052  381291 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-708733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
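The generated kubeadm config above is one file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A sketch of walking such a multi-document file, assuming the third-party gopkg.in/yaml.v3 package:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Println(doc.Kind, doc.APIVersion) // InitConfiguration, ClusterConfiguration, ...
        }
    }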
	I1018 09:45:02.504122  381291 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:02.513289  381291 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:02.513358  381291 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:02.521575  381291 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:02.534678  381291 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:02.552132  381291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:45:02.565736  381291 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:02.569535  381291 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
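The one-liner above makes the /etc/hosts update idempotent: grep -v strips any existing control-plane.minikube.internal entry, the fresh mapping is appended, and the result goes through a temp file and sudo cp. The same rewrite pattern in Go, pointed at a scratch path rather than /etc/hosts:

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const path = "/tmp/hosts.example" // illustrative; the log rewrites /etc/hosts
        const host = "control-plane.minikube.internal"
        data, _ := os.ReadFile(path)
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any prior mapping for the control-plane alias.
            if line != "" && !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, "192.168.103.2\t"+host)
        // Temp file plus rename: the atomic version of the log's cp step.
        tmp := path + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        if err := os.Rename(tmp, path); err != nil {
            panic(err)
        }
    }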
	I1018 09:45:02.579949  381291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:02.671930  381291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:02.696682  381291 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733 for IP: 192.168.103.2
	I1018 09:45:02.696707  381291 certs.go:195] generating shared ca certs ...
	I1018 09:45:02.696739  381291 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:02.696961  381291 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:02.697030  381291 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:02.697046  381291 certs.go:257] generating profile certs ...
	I1018 09:45:02.697127  381291 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key
	I1018 09:45:02.697158  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt with IP's: []
	I1018 09:45:03.021940  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt ...
	I1018 09:45:03.021971  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.crt: {Name:mk34305844f07bbce4828aa11fbd8babaff65d58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.022156  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key ...
	I1018 09:45:03.022167  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key: {Name:mk4f3f93ab07dd49c2ff8ec3a1448251b4cac3b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.022246  381291 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd
	I1018 09:45:03.022263  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1018 09:45:03.175129  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd ...
	I1018 09:45:03.175158  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd: {Name:mkc25ea49b370be29f02b5a8660805e0ac00d4c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.175332  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd ...
	I1018 09:45:03.175346  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd: {Name:mkf39e3929d7202cc4a55decf0767b42ac2055df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.175418  381291 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt.ffa152cd -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt
	I1018 09:45:03.175509  381291 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key
	I1018 09:45:03.175572  381291 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key
	I1018 09:45:03.175596  381291 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt with IP's: []
	I1018 09:45:03.410225  381291 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt ...
	I1018 09:45:03.410260  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt: {Name:mk912f810ad1a80c75b05b8385bdc60578025312 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:03.410467  381291 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key ...
	I1018 09:45:03.410486  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key: {Name:mkb65103205eaab03d8160e628125e95f2c1c9cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
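The crypto.go lines above issue the profile's apiserver cert for the SAN set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2], signed by the shared minikubeCA key. A compressed sketch of issuing a cert with IP SANs from an in-memory CA using only crypto/x509 (minikube instead loads ca.crt/ca.key from the profile directory; serials and lifetimes here are placeholders):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA key pair and self-signed CA cert.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf serving cert carrying the IP SANs from the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
            },
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        fmt.Println("issued leaf cert,", len(leafDER), "DER bytes")
    }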
	I1018 09:45:03.410723  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:03.410761  381291 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:03.410772  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:03.410800  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:03.410837  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:03.410871  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:03.410920  381291 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:03.411482  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:03.431424  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:03.449495  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:03.467299  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:03.485023  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:45:03.502535  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:03.520275  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:03.538841  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:03.556743  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:03.576835  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:03.594737  381291 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:03.612576  381291 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:03.625282  381291 ssh_runner.go:195] Run: openssl version
	I1018 09:45:03.631505  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:03.641420  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.646811  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.646891  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:03.695322  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:03.704814  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:03.713988  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.718050  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.718128  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:03.753040  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:03.762142  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:03.771213  381291 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.775498  381291 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.775558  381291 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:03.812378  381291 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:03.821406  381291 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:03.825731  381291 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:45:03.825796  381291 kubeadm.go:400] StartCluster: {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:03.825918  381291 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:03.825995  381291 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:03.857069  381291 cri.go:89] found id: ""
	I1018 09:45:03.857136  381291 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:03.865207  381291 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:45:03.872979  381291 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:45:03.873033  381291 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:45:03.881090  381291 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:45:03.881108  381291 kubeadm.go:157] found existing configuration files:
	
	I1018 09:45:03.881154  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:45:03.889156  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:45:03.889220  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:45:03.897045  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:45:03.905216  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:45:03.905300  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:45:03.912819  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:45:03.920656  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:45:03.920714  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:45:03.928538  381291 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:45:03.936379  381291 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:45:03.936437  381291 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:45:03.944237  381291 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:45:04.009498  381291 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:45:04.083257  381291 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:45:04.801345  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:04.801714  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
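The healthz probe above fails with connection refused because the kube-apiserver container exists but is not serving yet; the run below keeps alternating between this probe and log gathering. A sketch of such a wait loop (endpoint from the log; TLS verification skipped the way a bootstrap-time probe must be, since it can predate trust setup):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.85.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second) // connection refused until the apiserver binds
        }
        fmt.Println("timed out waiting for /healthz")
    }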
	I1018 09:45:04.801769  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:04.801851  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:04.829837  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:04.829861  353123 cri.go:89] found id: ""
	I1018 09:45:04.829878  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:04.829947  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.834147  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:04.834225  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:04.866567  353123 cri.go:89] found id: ""
	I1018 09:45:04.866602  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.866613  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:04.866620  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:04.866680  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:04.897462  353123 cri.go:89] found id: ""
	I1018 09:45:04.897493  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.897505  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:04.897513  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:04.897579  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:04.929017  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:04.929042  353123 cri.go:89] found id: ""
	I1018 09:45:04.929052  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:04.929113  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.933633  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:04.933703  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:04.962539  353123 cri.go:89] found id: ""
	I1018 09:45:04.962572  353123 logs.go:282] 0 containers: []
	W1018 09:45:04.962583  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:04.962590  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:04.962645  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:04.993181  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:04.993205  353123 cri.go:89] found id: ""
	I1018 09:45:04.993214  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:04.993272  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:04.997428  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:04.997550  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:05.028000  353123 cri.go:89] found id: ""
	I1018 09:45:05.028029  353123 logs.go:282] 0 containers: []
	W1018 09:45:05.028041  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:05.028049  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:05.028104  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:05.055921  353123 cri.go:89] found id: ""
	I1018 09:45:05.055951  353123 logs.go:282] 0 containers: []
	W1018 09:45:05.055962  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:05.055974  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:05.055988  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:05.102239  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:05.102275  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:05.134806  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:05.134860  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:05.248706  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:05.248744  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:05.268497  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:05.268527  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:05.334870  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:05.334893  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:05.334912  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:05.367588  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:05.367621  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:05.428601  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:05.428641  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
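Each "Gathering logs for ..." pair above is the same two-step: resolve container IDs with crictl ps -a --quiet --name=<component>, then dump each ID with crictl logs --tail 400. A sketch of that loop, shelling out as the log does (sudo omitted):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
            out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                logs, _ := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
            }
        }
    }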
	I1018 09:45:07.959155  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:07.959656  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:07.959714  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:07.959770  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:07.987228  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:07.987248  353123 cri.go:89] found id: ""
	I1018 09:45:07.987256  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:07.987311  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:07.991349  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:07.991416  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:08.019893  353123 cri.go:89] found id: ""
	I1018 09:45:08.019922  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.019932  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:08.019950  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:08.020007  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:08.050180  353123 cri.go:89] found id: ""
	I1018 09:45:08.050208  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.050220  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:08.050229  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:08.050295  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:08.089285  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:08.089310  353123 cri.go:89] found id: ""
	I1018 09:45:08.089321  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:08.089389  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:08.093682  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:08.093751  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:08.123444  353123 cri.go:89] found id: ""
	I1018 09:45:08.123472  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.123484  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:08.123503  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:08.123649  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:08.153159  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:08.153189  353123 cri.go:89] found id: ""
	I1018 09:45:08.153200  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:08.153263  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:08.157466  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:08.157556  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:08.189498  353123 cri.go:89] found id: ""
	I1018 09:45:08.189531  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.189542  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:08.189554  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:08.189639  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:08.220603  353123 cri.go:89] found id: ""
	I1018 09:45:08.220634  353123 logs.go:282] 0 containers: []
	W1018 09:45:08.220646  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:08.220657  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:08.220670  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:08.262810  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:08.262863  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:08.369896  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:08.369934  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:08.390711  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:08.390742  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:08.448636  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:08.448666  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:08.448680  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:08.482980  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:08.483011  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:08.532288  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:08.532326  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:08.560227  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:08.560254  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
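The kubeadm init transcript for default-k8s-diff-port-942905 follows, ending in a kubeadm join command whose --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A sketch of reproducing that hash from ca.crt:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }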
	I1018 09:45:13.623333  381160 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:45:13.623426  381160 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:45:13.623590  381160 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:45:13.623689  381160 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:45:13.623745  381160 kubeadm.go:318] OS: Linux
	I1018 09:45:13.623807  381160 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:45:13.623910  381160 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:45:13.623961  381160 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:45:13.624041  381160 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:45:13.624120  381160 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:45:13.624192  381160 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:45:13.624287  381160 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:45:13.624351  381160 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:45:13.624446  381160 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:45:13.624563  381160 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:45:13.624694  381160 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:45:13.624794  381160 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:45:13.626462  381160 out.go:252]   - Generating certificates and keys ...
	I1018 09:45:13.626581  381160 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:45:13.626686  381160 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:45:13.626787  381160 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:45:13.626892  381160 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:45:13.626981  381160 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:45:13.627065  381160 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:45:13.627148  381160 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:45:13.627344  381160 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-942905 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:45:13.627435  381160 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:45:13.627631  381160 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-942905 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1018 09:45:13.627730  381160 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:45:13.627842  381160 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:45:13.627919  381160 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:45:13.628011  381160 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:45:13.628075  381160 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:45:13.628158  381160 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:45:13.628225  381160 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:45:13.628320  381160 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:45:13.628395  381160 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:45:13.628548  381160 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:45:13.628650  381160 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:45:13.629949  381160 out.go:252]   - Booting up control plane ...
	I1018 09:45:13.630050  381160 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:45:13.630140  381160 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:45:13.630232  381160 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:45:13.630322  381160 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:45:13.630418  381160 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:45:13.630550  381160 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:45:13.630689  381160 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:45:13.630746  381160 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:45:13.630928  381160 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:45:13.631079  381160 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:45:13.631164  381160 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001848222s
	I1018 09:45:13.631296  381160 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:45:13.631402  381160 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1018 09:45:13.631548  381160 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:45:13.631695  381160 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:45:13.631809  381160 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.513817245s
	I1018 09:45:13.632007  381160 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.918189549s
	I1018 09:45:13.632123  381160 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501519656s
	I1018 09:45:13.632274  381160 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:45:13.632453  381160 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:45:13.632565  381160 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:45:13.632923  381160 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-942905 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:45:13.633007  381160 kubeadm.go:318] [bootstrap-token] Using token: yxk4qh.tvooldc23v7gryvn
	I1018 09:45:13.634351  381160 out.go:252]   - Configuring RBAC rules ...
	I1018 09:45:13.634491  381160 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:45:13.634638  381160 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:45:13.634872  381160 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:45:13.635082  381160 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1018 09:45:13.635232  381160 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:45:13.635336  381160 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:45:13.635479  381160 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:45:13.635535  381160 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:45:13.635628  381160 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:45:13.635636  381160 kubeadm.go:318] 
	I1018 09:45:13.635709  381160 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:45:13.635718  381160 kubeadm.go:318] 
	I1018 09:45:13.635838  381160 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:45:13.635849  381160 kubeadm.go:318] 
	I1018 09:45:13.635880  381160 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:45:13.635960  381160 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:45:13.636046  381160 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:45:13.636060  381160 kubeadm.go:318] 
	I1018 09:45:13.636147  381160 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:45:13.636160  381160 kubeadm.go:318] 
	I1018 09:45:13.636229  381160 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:45:13.636242  381160 kubeadm.go:318] 
	I1018 09:45:13.636323  381160 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:45:13.636459  381160 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:45:13.636539  381160 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:45:13.636545  381160 kubeadm.go:318] 
	I1018 09:45:13.636620  381160 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:45:13.636709  381160 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:45:13.636745  381160 kubeadm.go:318] 
	I1018 09:45:13.636935  381160 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token yxk4qh.tvooldc23v7gryvn \
	I1018 09:45:13.637103  381160 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:45:13.637144  381160 kubeadm.go:318] 	--control-plane 
	I1018 09:45:13.637153  381160 kubeadm.go:318] 
	I1018 09:45:13.637284  381160 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:45:13.637305  381160 kubeadm.go:318] 
	I1018 09:45:13.637437  381160 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token yxk4qh.tvooldc23v7gryvn \
	I1018 09:45:13.637618  381160 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:45:13.637642  381160 cni.go:84] Creating CNI manager for ""
	I1018 09:45:13.637654  381160 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:13.639200  381160 out.go:179] * Configuring CNI (Container Networking Interface) ...
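
The --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 pin of the cluster CA's public key (its DER-encoded SubjectPublicKeyInfo). A minimal Go sketch of computing such a pin, assuming kubeadm's default CA path; this is illustrative, not kubeadm's source:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path is kubeadm's default CA location (an assumption of this sketch).
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The pin is SHA-256 over the DER-encoded SubjectPublicKeyInfo,
	// printed in the "sha256:<hex>" form seen in the join command.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
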
	I1018 09:45:11.108896  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:11.109308  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:11.109357  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:11.109403  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:11.160093  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:11.160115  353123 cri.go:89] found id: ""
	I1018 09:45:11.160151  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:11.160219  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:11.165630  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:11.165706  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:11.210308  353123 cri.go:89] found id: ""
	I1018 09:45:11.210338  353123 logs.go:282] 0 containers: []
	W1018 09:45:11.210351  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:11.210359  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:11.210417  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:11.246137  353123 cri.go:89] found id: ""
	I1018 09:45:11.246167  353123 logs.go:282] 0 containers: []
	W1018 09:45:11.246178  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:11.246186  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:11.246245  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:11.281102  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:11.281132  353123 cri.go:89] found id: ""
	I1018 09:45:11.281143  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:11.281208  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:11.285303  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:11.285371  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:11.319163  353123 cri.go:89] found id: ""
	I1018 09:45:11.319198  353123 logs.go:282] 0 containers: []
	W1018 09:45:11.319210  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:11.319218  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:11.319279  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:11.363611  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:11.363638  353123 cri.go:89] found id: ""
	I1018 09:45:11.363648  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:11.363870  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:11.370458  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:11.370532  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:11.407759  353123 cri.go:89] found id: ""
	I1018 09:45:11.407859  353123 logs.go:282] 0 containers: []
	W1018 09:45:11.407873  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:11.407883  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:11.407971  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:11.444234  353123 cri.go:89] found id: ""
	I1018 09:45:11.444268  353123 logs.go:282] 0 containers: []
	W1018 09:45:11.444280  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:11.444292  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:11.444310  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:11.468742  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:11.468792  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:11.539991  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:11.540017  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:11.540036  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:11.584059  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:11.584090  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:11.647619  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:11.647664  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:11.681316  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:11.681357  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:11.750785  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:11.750842  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:11.792254  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:11.792295  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
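
The repeated "sudo crictl ps -a --quiet --name=..." probes above enumerate container IDs per control-plane component before log collection. A hypothetical Go sketch of that shell-out pattern; the function name and output handling are illustrative, not minikube's actual cri.go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs shells out to crictl the way the probes above do and
// returns the matching container IDs, one per output line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		// Zero IDs corresponds to the `No container was found matching` warnings.
		fmt.Printf("%s: %d containers %v\n", component, len(ids), ids)
	}
}
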
	I1018 09:45:14.567096  381291 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:45:14.567174  381291 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:45:14.567327  381291 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:45:14.567418  381291 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:45:14.567472  381291 kubeadm.go:318] OS: Linux
	I1018 09:45:14.567541  381291 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:45:14.567619  381291 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:45:14.567704  381291 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:45:14.567772  381291 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:45:14.567841  381291 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:45:14.567913  381291 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:45:14.567980  381291 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:45:14.568058  381291 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:45:14.568171  381291 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:45:14.568269  381291 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:45:14.568389  381291 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:45:14.568485  381291 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:45:14.570895  381291 out.go:252]   - Generating certificates and keys ...
	I1018 09:45:14.570991  381291 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:45:14.571069  381291 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:45:14.571184  381291 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:45:14.571277  381291 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:45:14.571372  381291 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:45:14.571447  381291 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:45:14.571556  381291 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:45:14.571762  381291 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-708733] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:45:14.571853  381291 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:45:14.572037  381291 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-708733] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:45:14.572155  381291 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:45:14.572257  381291 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:45:14.572332  381291 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:45:14.572423  381291 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:45:14.572494  381291 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:45:14.572587  381291 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:45:14.572664  381291 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:45:14.572792  381291 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:45:14.572906  381291 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:45:14.573015  381291 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:45:14.573112  381291 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:45:14.575727  381291 out.go:252]   - Booting up control plane ...
	I1018 09:45:14.575852  381291 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:45:14.575947  381291 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:45:14.576022  381291 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:45:14.576158  381291 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:45:14.576284  381291 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:45:14.576418  381291 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:45:14.576503  381291 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:45:14.576544  381291 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:45:14.576657  381291 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:45:14.576779  381291 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:45:14.576870  381291 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000826657s
	I1018 09:45:14.577010  381291 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:45:14.577141  381291 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1018 09:45:14.577272  381291 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:45:14.577395  381291 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:45:14.577518  381291 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.407273256s
	I1018 09:45:14.577616  381291 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.792897104s
	I1018 09:45:14.577706  381291 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00174213s
	I1018 09:45:14.577881  381291 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:45:14.578053  381291 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:45:14.578158  381291 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:45:14.578384  381291 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-708733 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:45:14.578441  381291 kubeadm.go:318] [bootstrap-token] Using token: ii18s7.hvv5v1lqygevdwel
	I1018 09:45:14.579653  381291 out.go:252]   - Configuring RBAC rules ...
	I1018 09:45:14.579759  381291 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:45:14.579909  381291 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:45:14.580112  381291 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:45:14.580280  381291 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I1018 09:45:14.580437  381291 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:45:14.580554  381291 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:45:14.580717  381291 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:45:14.580787  381291 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:45:14.580873  381291 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:45:14.580883  381291 kubeadm.go:318] 
	I1018 09:45:14.580971  381291 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:45:14.580982  381291 kubeadm.go:318] 
	I1018 09:45:14.581112  381291 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:45:14.581130  381291 kubeadm.go:318] 
	I1018 09:45:14.581170  381291 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:45:14.581255  381291 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:45:14.581326  381291 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:45:14.581337  381291 kubeadm.go:318] 
	I1018 09:45:14.581417  381291 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:45:14.581432  381291 kubeadm.go:318] 
	I1018 09:45:14.581492  381291 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:45:14.581501  381291 kubeadm.go:318] 
	I1018 09:45:14.581579  381291 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:45:14.581691  381291 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:45:14.581795  381291 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:45:14.581804  381291 kubeadm.go:318] 
	I1018 09:45:14.581937  381291 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:45:14.582054  381291 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:45:14.582065  381291 kubeadm.go:318] 
	I1018 09:45:14.582169  381291 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ii18s7.hvv5v1lqygevdwel \
	I1018 09:45:14.582324  381291 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:45:14.582359  381291 kubeadm.go:318] 	--control-plane 
	I1018 09:45:14.582374  381291 kubeadm.go:318] 
	I1018 09:45:14.582482  381291 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:45:14.582490  381291 kubeadm.go:318] 
	I1018 09:45:14.582600  381291 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ii18s7.hvv5v1lqygevdwel \
	I1018 09:45:14.582741  381291 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:45:14.582759  381291 cni.go:84] Creating CNI manager for ""
	I1018 09:45:14.582769  381291 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:14.584931  381291 out.go:179] * Configuring CNI (Container Networking Interface) ...
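
The cni.go lines record a driver/runtime-based decision: the docker driver combined with the crio runtime yields a kindnet recommendation. A hypothetical reduction of that branch; minikube's real logic covers many more combinations, and the fallback value here is an assumption of this sketch:

package main

import "fmt"

// chooseCNI is an illustrative reduction of the decision logged above; the
// "bridge" fallback is an assumption of this sketch, not minikube's default.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime == "crio" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // kindnet, as in the log
}
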
	I1018 09:45:13.640517  381160 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:45:13.646191  381160 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:45:13.646218  381160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:45:13.663934  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:45:13.950416  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-942905 minikube.k8s.io/updated_at=2025_10_18T09_45_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=default-k8s-diff-port-942905 minikube.k8s.io/primary=true
	I1018 09:45:13.950652  381160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:45:13.950863  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:13.977978  381160 ops.go:34] apiserver oom_adj: -16
	I1018 09:45:14.091447  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:14.592432  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:15.091717  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:14.586307  381291 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:45:14.591101  381291 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:45:14.591125  381291 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:45:14.606777  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1018 09:45:14.843177  381291 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:45:14.843269  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:14.843269  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-708733 minikube.k8s.io/updated_at=2025_10_18T09_45_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=newest-cni-708733 minikube.k8s.io/primary=true
	I1018 09:45:14.852998  381291 ops.go:34] apiserver oom_adj: -16
	I1018 09:45:14.929229  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:15.429682  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:14.419066  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:14.419493  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:14.419546  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:14.419607  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:14.448185  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:14.448256  353123 cri.go:89] found id: ""
	I1018 09:45:14.448267  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:14.448328  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:14.452319  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:14.452374  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:14.480038  353123 cri.go:89] found id: ""
	I1018 09:45:14.480062  353123 logs.go:282] 0 containers: []
	W1018 09:45:14.480069  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:14.480076  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:14.480134  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:14.507036  353123 cri.go:89] found id: ""
	I1018 09:45:14.507065  353123 logs.go:282] 0 containers: []
	W1018 09:45:14.507075  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:14.507083  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:14.507140  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:14.535765  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:14.535801  353123 cri.go:89] found id: ""
	I1018 09:45:14.535812  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:14.535890  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:14.540254  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:14.540326  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:14.567176  353123 cri.go:89] found id: ""
	I1018 09:45:14.567199  353123 logs.go:282] 0 containers: []
	W1018 09:45:14.567213  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:14.567221  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:14.567287  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:14.599177  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:14.599203  353123 cri.go:89] found id: ""
	I1018 09:45:14.599214  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:14.599273  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:14.603755  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:14.603844  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:14.639089  353123 cri.go:89] found id: ""
	I1018 09:45:14.639116  353123 logs.go:282] 0 containers: []
	W1018 09:45:14.639124  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:14.639129  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:14.639179  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:14.668984  353123 cri.go:89] found id: ""
	I1018 09:45:14.669010  353123 logs.go:282] 0 containers: []
	W1018 09:45:14.669017  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:14.669027  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:14.669041  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:14.726121  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:14.726157  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:14.758842  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:14.758874  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:14.864947  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:14.864991  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:14.891969  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:14.892008  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:14.966221  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:14.966247  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:14.966264  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:15.005335  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:15.005367  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:15.056076  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:15.056111  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:17.587915  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:17.588436  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:17.588496  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:17.588558  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:17.619412  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:17.619433  353123 cri.go:89] found id: ""
	I1018 09:45:17.619442  353123 logs.go:282] 1 containers: [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:17.619502  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:17.624441  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:17.624498  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:17.655983  353123 cri.go:89] found id: ""
	I1018 09:45:17.656006  353123 logs.go:282] 0 containers: []
	W1018 09:45:17.656014  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:17.656020  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:17.656066  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:17.685101  353123 cri.go:89] found id: ""
	I1018 09:45:17.685129  353123 logs.go:282] 0 containers: []
	W1018 09:45:17.685139  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:17.685147  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:17.685193  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:17.713723  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:17.713743  353123 cri.go:89] found id: ""
	I1018 09:45:17.713750  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:17.713819  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:17.718014  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:17.718096  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:17.746701  353123 cri.go:89] found id: ""
	I1018 09:45:17.746732  353123 logs.go:282] 0 containers: []
	W1018 09:45:17.746743  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:17.746751  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:17.746808  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:17.773544  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:17.773562  353123 cri.go:89] found id: ""
	I1018 09:45:17.773570  353123 logs.go:282] 1 containers: [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:17.773627  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:17.777920  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:17.777985  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:17.804134  353123 cri.go:89] found id: ""
	I1018 09:45:17.804157  353123 logs.go:282] 0 containers: []
	W1018 09:45:17.804166  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:17.804172  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:17.804218  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:17.831726  353123 cri.go:89] found id: ""
	I1018 09:45:17.831755  353123 logs.go:282] 0 containers: []
	W1018 09:45:17.831763  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:17.831776  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:17.831804  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:17.892580  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:17.892609  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:17.892625  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:17.923798  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:17.923840  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:17.981505  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:17.981540  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:18.011677  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:18.011702  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:18.062861  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:18.062902  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:18.095590  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:18.095620  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:18.194780  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:18.194817  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
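
The api_server.go lines above keep polling https://192.168.85.2:8443/healthz and getting "connection refused" because no apiserver is listening yet, gathering logs between attempts. A minimal Go sketch of that kind of polling loop; TLS verification is skipped only to keep the sketch short, and a real client would pin the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries an apiserver health endpoint until it answers 200 or
// the attempts are exhausted, the shape of the loop logged above.
func pollHealthz(url string, attempts int, delay time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("%s not healthy after %d attempts", url, attempts)
}

func main() {
	if err := pollHealthz("https://192.168.85.2:8443/healthz", 5, 3*time.Second); err != nil {
		fmt.Println(err) // matches the "connection refused" retries in the log
	}
}
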
	I1018 09:45:15.592029  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:16.091947  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:16.591587  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:17.091627  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:17.591496  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:18.092073  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:18.592056  381160 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:18.662692  381160 kubeadm.go:1113] duration metric: took 4.712524586s to wait for elevateKubeSystemPrivileges
	I1018 09:45:18.662739  381160 kubeadm.go:402] duration metric: took 14.495125018s to StartCluster
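
The half-second cadence of the "kubectl get sa default" runs above is a wait loop: query until the default ServiceAccount exists, then report the total duration (4.71s here). A generic Go sketch of that shape; the helper name and the simulated check are invented for illustration:

package main

import (
	"fmt"
	"time"
)

// waitFor polls fn at a fixed interval until it succeeds or the timeout
// elapses, the pattern behind the ~500ms "get sa default" retries above.
func waitFor(timeout, interval time.Duration, fn func() error) (time.Duration, error) {
	start := time.Now()
	deadline := start.Add(timeout)
	for {
		if err := fn(); err == nil {
			return time.Since(start), nil
		}
		if time.Now().After(deadline) {
			return time.Since(start), fmt.Errorf("timed out after %s", timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	calls := 0
	took, err := waitFor(10*time.Second, 500*time.Millisecond, func() error {
		calls++
		if calls < 5 {
			return fmt.Errorf("not ready") // stand-in for a failing kubectl run
		}
		return nil
	})
	fmt.Println(took, err)
}
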
	I1018 09:45:18.662769  381160 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:18.662872  381160 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:18.664495  381160 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:18.664750  381160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:45:18.664792  381160 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:18.664868  381160 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:18.664957  381160 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942905"
	I1018 09:45:18.664977  381160 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-942905"
	I1018 09:45:18.664976  381160 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942905"
	I1018 09:45:18.665002  381160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942905"
	I1018 09:45:18.665012  381160 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:45:18.665052  381160 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:18.665344  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:45:18.665528  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:45:18.666226  381160 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:18.667426  381160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:18.691896  381160 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:18.692923  381160 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-942905"
	I1018 09:45:18.692972  381160 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:45:18.693359  381160 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:45:18.695470  381160 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:18.695492  381160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:18.695546  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:18.728890  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:18.730310  381160 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:18.730328  381160 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:18.730388  381160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:45:18.754043  381160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33206 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:45:18.778092  381160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:45:18.829518  381160 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:18.847379  381160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:18.874069  381160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:18.966060  381160 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1018 09:45:18.967699  381160 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942905" to be "Ready" ...
	I1018 09:45:19.195371  381160 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:45:15.929652  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:16.429445  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:16.929972  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:17.429515  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:17.929927  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:18.430039  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:18.930031  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:19.430094  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:19.929452  381291 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:45:20.013238  381291 kubeadm.go:1113] duration metric: took 5.170035554s to wait for elevateKubeSystemPrivileges
	I1018 09:45:20.013281  381291 kubeadm.go:402] duration metric: took 16.187491129s to StartCluster
	I1018 09:45:20.013306  381291 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:20.013381  381291 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:20.015180  381291 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:20.015421  381291 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:20.015456  381291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:45:20.015647  381291 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:20.015469  381291 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:20.015685  381291 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-708733"
	I1018 09:45:20.015708  381291 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-708733"
	I1018 09:45:20.015719  381291 addons.go:69] Setting default-storageclass=true in profile "newest-cni-708733"
	I1018 09:45:20.015742  381291 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:20.015748  381291 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-708733"
	I1018 09:45:20.016109  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:20.016227  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:20.016851  381291 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:20.017958  381291 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:20.038793  381291 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:20.039481  381291 addons.go:238] Setting addon default-storageclass=true in "newest-cni-708733"
	I1018 09:45:20.039533  381291 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:20.040092  381291 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:20.040770  381291 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:20.040789  381291 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:20.040876  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:20.071038  381291 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:20.071071  381291 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:20.071101  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:20.071159  381291 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:20.097452  381291 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33211 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:20.114421  381291 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:45:20.176142  381291 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:20.192100  381291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:20.218362  381291 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:20.306015  381291 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1018 09:45:20.307638  381291 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:20.307705  381291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:20.506543  381291 api_server.go:72] duration metric: took 491.084951ms to wait for apiserver process to appear ...
	I1018 09:45:20.506570  381291 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:20.506589  381291 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:20.512339  381291 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:45:20.513415  381291 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:20.513447  381291 api_server.go:131] duration metric: took 6.869002ms to wait for apiserver health ...
	I1018 09:45:20.513457  381291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:20.513419  381291 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:45:19.196561  381160 addons.go:514] duration metric: took 531.688902ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:45:19.471037  381160 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-942905" context rescaled to 1 replicas
	I1018 09:45:20.514937  381291 addons.go:514] duration metric: took 499.459422ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:45:20.516434  381291 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:20.516462  381291 system_pods.go:61] "coredns-66bc5c9577-pcqqp" [56bb81cf-dbf6-45cd-8398-91762e3ce4a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:20.516476  381291 system_pods.go:61] "etcd-newest-cni-708733" [b25803cb-7959-4752-b0e3-7f80be73ac86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:20.516482  381291 system_pods.go:61] "kindnet-z7dcb" [77bfd17c-f58c-418b-8e31-c2893c4a3647] Running
	I1018 09:45:20.516488  381291 system_pods.go:61] "kube-apiserver-newest-cni-708733" [846be6bb-a108-477e-9128-e8d6d2e396bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:20.516494  381291 system_pods.go:61] "kube-controller-manager-newest-cni-708733" [82bcfbf8-19ab-4fd7-856f-f7eb0d2e887b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:20.516501  381291 system_pods.go:61] "kube-proxy-nq79m" [7618e803-4e75-4661-ab8d-99195c316305] Running
	I1018 09:45:20.516506  381291 system_pods.go:61] "kube-scheduler-newest-cni-708733" [5d3ff5b3-f4aa-4f9f-a1ce-6bc323fa29dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:20.516511  381291 system_pods.go:61] "storage-provisioner" [930742e4-08ac-435f-8ae3-a6bbf9a76bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:20.516525  381291 system_pods.go:74] duration metric: took 3.057156ms to wait for pod list to return data ...
	I1018 09:45:20.516536  381291 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:20.519374  381291 default_sa.go:45] found service account: "default"
	I1018 09:45:20.519392  381291 default_sa.go:55] duration metric: took 2.850542ms for default service account to be created ...
	I1018 09:45:20.519402  381291 kubeadm.go:586] duration metric: took 503.952349ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:20.519418  381291 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:20.521666  381291 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:20.521688  381291 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:20.521700  381291 node_conditions.go:105] duration metric: took 2.277186ms to run NodePressure ...
	I1018 09:45:20.521712  381291 start.go:241] waiting for startup goroutines ...
	I1018 09:45:20.811182  381291 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-708733" context rescaled to 1 replicas
	I1018 09:45:20.811221  381291 start.go:246] waiting for cluster config update ...
	I1018 09:45:20.811232  381291 start.go:255] writing updated cluster config ...
	I1018 09:45:20.811515  381291 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:20.860536  381291 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:45:20.862008  381291 out.go:179] * Done! kubectl is now configured to use "newest-cni-708733" cluster and "default" namespace by default
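A note on the CoreDNS rewrite logged at 09:45:20 above: minikube splices a host record into the cluster Corefile with sed and then replaces the ConfigMap. Judging from that sed program, the rewritten server block should contain roughly the following (a reconstruction from the command, not captured output):

	log
	errors
	...
	hosts {
	   192.168.103.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The hosts stanza maps host.minikube.internal to 192.168.103.1 (the host-side address of the cluster network), and fallthrough hands every other name on to the forward plugin.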
	
	
	==> CRI-O <==
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.646918114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.648980475Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1cae5a75-7091-4217-b9ea-a08c144a2384 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.64932093Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e0f4a5af-a4e9-4211-ae7a-14c22f8a09c3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.650516861Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.651139068Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.651289342Z" level=info msg="Ran pod sandbox 51ee561deff2fee679a98ce63ad7cf5da8edf7e2e80c0ccd5a74928ddba007d2 with infra container: kube-system/kindnet-z7dcb/POD" id=1cae5a75-7091-4217-b9ea-a08c144a2384 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.651723029Z" level=info msg="Ran pod sandbox fae6411c40d1c33f0c1877a3de62eac5fcd3858fce9b504de8bc7a0e715115f5 with infra container: kube-system/kube-proxy-nq79m/POD" id=e0f4a5af-a4e9-4211-ae7a-14c22f8a09c3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.652600176Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3ac386c7-e760-44cc-9067-7ede8e2d0244 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.65262774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eb655bfe-e542-40e8-88c8-2fed2f9ebcc8 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.653564409Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d15721d4-1402-4f82-85e9-336aa6517e65 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.653636715Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e77b6fb4-4dd6-4817-8a77-c73dfcd797a5 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.657310341Z" level=info msg="Creating container: kube-system/kindnet-z7dcb/kindnet-cni" id=3a54e979-4768-4ccd-8741-ba6214a5f837 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.657566846Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.658439398Z" level=info msg="Creating container: kube-system/kube-proxy-nq79m/kube-proxy" id=56489811-8c5b-4543-b550-e6b5a08007eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.660411512Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.661709414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.662285693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.665886671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.666303125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.689212458Z" level=info msg="Created container 9bd6dfd16b2a6f16c2868b434388bf448973502d1df151caa96e0b2942ee95a3: kube-system/kindnet-z7dcb/kindnet-cni" id=3a54e979-4768-4ccd-8741-ba6214a5f837 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.690079944Z" level=info msg="Starting container: 9bd6dfd16b2a6f16c2868b434388bf448973502d1df151caa96e0b2942ee95a3" id=a064f6c0-e210-4eef-9769-e35f34389034 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.692866612Z" level=info msg="Started container" PID=1503 containerID=9bd6dfd16b2a6f16c2868b434388bf448973502d1df151caa96e0b2942ee95a3 description=kube-system/kindnet-z7dcb/kindnet-cni id=a064f6c0-e210-4eef-9769-e35f34389034 name=/runtime.v1.RuntimeService/StartContainer sandboxID=51ee561deff2fee679a98ce63ad7cf5da8edf7e2e80c0ccd5a74928ddba007d2
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.694509611Z" level=info msg="Created container a75cae8335f56c46063913203c0e93986e2e5fcf15b431fd55e5d812e796c2a5: kube-system/kube-proxy-nq79m/kube-proxy" id=56489811-8c5b-4543-b550-e6b5a08007eb name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.696546113Z" level=info msg="Starting container: a75cae8335f56c46063913203c0e93986e2e5fcf15b431fd55e5d812e796c2a5" id=12655066-8c32-4d31-9206-c6bc2e127b2a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:19 newest-cni-708733 crio[771]: time="2025-10-18T09:45:19.700070436Z" level=info msg="Started container" PID=1504 containerID=a75cae8335f56c46063913203c0e93986e2e5fcf15b431fd55e5d812e796c2a5 description=kube-system/kube-proxy-nq79m/kube-proxy id=12655066-8c32-4d31-9206-c6bc2e127b2a name=/runtime.v1.RuntimeService/StartContainer sandboxID=fae6411c40d1c33f0c1877a3de62eac5fcd3858fce9b504de8bc7a0e715115f5
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a75cae8335f56       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   fae6411c40d1c       kube-proxy-nq79m                            kube-system
	9bd6dfd16b2a6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   51ee561deff2f       kindnet-z7dcb                               kube-system
	e1aa033e6b6ca       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   c5e30c187d885       kube-apiserver-newest-cni-708733            kube-system
	9a51f35dd9c50       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   2baaa6f5e05cc       kube-controller-manager-newest-cni-708733   kube-system
	09c0b6f732caa       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   915e13b9c9238       etcd-newest-cni-708733                      kube-system
	dd7d269428cf8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   cf196d31ce12c       kube-scheduler-newest-cni-708733            kube-system
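The IMAGE column in this table shows image IDs rather than repository tags (CRI-O resolves tags to IDs at pull time). If a tag is needed, the mapping is available on the node; a plausible invocation, assuming the crictl that ships in the minikube node image:

	out/minikube-linux-amd64 -p newest-cni-708733 ssh -- sudo crictl images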
	
	
	==> describe nodes <==
	Name:               newest-cni-708733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-708733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-708733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-708733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:45:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:45:13 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:45:13 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:45:13 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:45:13 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-708733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b382c5a4-fd22-47f3-b8a6-fb04181833ca
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-708733                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-z7dcb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-708733             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-708733    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-nq79m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-708733             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 2s    kube-proxy       
	  Normal  Starting                 9s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s    kubelet          Node newest-cni-708733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s    kubelet          Node newest-cni-708733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s    kubelet          Node newest-cni-708733 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-708733 event: Registered Node newest-cni-708733 in Controller
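The node.kubernetes.io/not-ready:NoSchedule taint shown above is what keeps coredns-66bc5c9577-pcqqp and storage-provisioner Pending in the earlier pod list: kindnet had only just written the CNI config, so the Ready condition was still False when this snapshot was taken. The taint itself can be watched with plain kubectl:

	kubectl --context newest-cni-708733 get node newest-cni-708733 -o jsonpath='{.spec.taints}'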
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
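The repeated martian-source entries are the kernel noting packets whose source address (here 127.0.0.1 or 10.244.0.1) is implausible for the interface they arrived on, a common artifact of hairpin NAT in nested container networks rather than a test failure. Whether they are logged at all is gated by a sysctl, readable with:

	sysctl net.ipv4.conf.all.log_martians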
	
	
	==> etcd [09c0b6f732caa7815898dae9b6fece560f3f0dd9b15aa5a01357dd027d583024] <==
	{"level":"warn","ts":"2025-10-18T09:45:10.308509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.319586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.329332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.338759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.347305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.356072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.363224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.371494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.379639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.386872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.407721Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.415581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.422117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.429892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.437992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.455410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.473164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.486427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.502685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.511762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.518236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.535915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.547258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.555513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.645511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60194","server-name":"","error":"EOF"}
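These rejected-connection warnings all land in the second before kube-apiserver finished starting (09:45:10 here, versus the apiserver's first log line at 09:45:11). They are most likely bare TCP probes against etcd's TLS client port that close before handshaking, which etcd reports as EOF; the stream stops once startup completes. A probe of that shape can be reproduced illustratively (assuming nc is available on the node):

	nc -z 127.0.0.1 2379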
	
	
	==> kernel <==
	 09:45:22 up  1:27,  0 user,  load average: 3.14, 2.93, 1.88
	Linux newest-cni-708733 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9bd6dfd16b2a6f16c2868b434388bf448973502d1df151caa96e0b2942ee95a3] <==
	I1018 09:45:19.843362       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:19.843614       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:45:19.843750       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:19.843766       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:19.843785       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:20.140027       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:20.140109       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:20.140123       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:20.199309       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:20.499035       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:20.499224       1 metrics.go:72] Registering metrics
	I1018 09:45:20.499405       1 controller.go:711] "Syncing nftables rules"
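The one warning in this block, "nri plugin exited", is kindnet's network-policy component failing to register with the runtime over NRI because CRI-O is not exposing /var/run/nri/nri.sock; the caches-synced and nftables lines that follow show kindnet carrying on without it. Whether the socket exists can be checked directly (a hypothetical manual check, not part of the test):

	out/minikube-linux-amd64 -p newest-cni-708733 ssh -- ls /var/run/nri/nri.sock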
	
	
	==> kube-apiserver [e1aa033e6b6ca695d4ed1e0d8a6ac3e160c5704e39525efb702b6c76727ee489] <==
	I1018 09:45:11.217019       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:45:11.217043       1 policy_source.go:240] refreshing policies
	I1018 09:45:11.217759       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:45:11.243928       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:11.318266       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:11.318398       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1018 09:45:11.328342       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:11.328459       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1018 09:45:12.119816       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:45:12.126160       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:45:12.126178       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:12.680777       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:12.724204       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:12.824914       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:45:12.832499       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1018 09:45:12.834031       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:12.839444       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:45:13.575204       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:13.984444       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:14.000893       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:45:14.015251       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:45:19.327178       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1018 09:45:19.430308       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:19.434661       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:19.678207       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9a51f35dd9c508ac0b6d90b43f661af7144a61d94ab8b41cdf340337dfea2a7e] <==
	I1018 09:45:18.573973       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:18.574001       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:45:18.574425       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:45:18.574446       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:45:18.574522       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:45:18.574571       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:45:18.574577       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:45:18.574602       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:45:18.574920       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:45:18.574956       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:45:18.575183       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:45:18.575197       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:45:18.575329       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:45:18.577443       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:45:18.578543       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:45:18.578612       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:45:18.578698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-708733"
	I1018 09:45:18.578755       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:45:18.580887       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:18.585346       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:45:18.585461       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:45:18.587813       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:45:18.587963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:45:18.588091       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:45:18.598445       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a75cae8335f56c46063913203c0e93986e2e5fcf15b431fd55e5d812e796c2a5] <==
	I1018 09:45:19.736016       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:19.804017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:19.904429       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:19.904474       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:45:19.904574       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:19.924759       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:19.924850       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:19.931325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:19.931766       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:19.931802       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:19.933223       1 config.go:200] "Starting service config controller"
	I1018 09:45:19.933288       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:19.933321       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:19.933332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:19.933411       1 config.go:309] "Starting node config controller"
	I1018 09:45:19.933418       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:19.933425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:45:19.933908       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:19.933936       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:20.033506       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:45:20.033671       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:20.034698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
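The only error in this block is kube-proxy's own configuration lint, and it suggests its fix inline. In a kubeadm-style cluster that setting lives in the kube-proxy ConfigMap; a minimal sketch of the relevant fragment, assuming the stock config layout:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	nodePortAddresses:
	- primary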
	
	
	==> kube-scheduler [dd7d269428cf8e5e1b432ba9b7971205eba893864d6f268dc86875865daf0bdc] <==
	I1018 09:45:11.935552       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:11.937705       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:11.937757       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:11.938059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:45:11.938100       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1018 09:45:11.939612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:45:11.940074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:45:11.940226       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1018 09:45:11.940343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:45:11.943130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:45:11.943258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:45:11.943466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:45:11.943555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:45:11.943645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1018 09:45:11.943756       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:45:11.943870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:45:11.944127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:45:11.944704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:45:11.944766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:45:11.944919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:45:11.944930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1018 09:45:11.945048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:45:11.945116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:45:11.945608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1018 09:45:13.538157       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
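The burst of "Failed to watch ... is forbidden" errors at 09:45:11 is the scheduler coming up before kubeadm has finished publishing the bootstrap RBAC; the caches-synced line two seconds later shows the watches recovered on retry. The grant it was waiting for is the standard bootstrap binding, inspectable with:

	kubectl --context newest-cni-708733 get clusterrolebinding system:kube-scheduler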
	
	
	==> kubelet <==
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.030869    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/323e323b1e3adbcdbff264283b8cc8d5-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-708733\" (UID: \"323e323b1e3adbcdbff264283b8cc8d5\") " pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.030914    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/323e323b1e3adbcdbff264283b8cc8d5-k8s-certs\") pod \"kube-apiserver-newest-cni-708733\" (UID: \"323e323b1e3adbcdbff264283b8cc8d5\") " pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.030944    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4c2b206bca2d87fbf095157352bbbce7-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-708733\" (UID: \"4c2b206bca2d87fbf095157352bbbce7\") " pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.821662    1307 apiserver.go:52] "Watching apiserver"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.829732    1307 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.875507    1307 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.875639    1307 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: E1018 09:45:14.886979    1307 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-708733\" already exists" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: E1018 09:45:14.887646    1307 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-708733\" already exists" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.899423    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-708733" podStartSLOduration=1.899402728 podStartE2EDuration="1.899402728s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.898313564 +0000 UTC m=+1.152867011" watchObservedRunningTime="2025-10-18 09:45:14.899402728 +0000 UTC m=+1.153956175"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.925059    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-708733" podStartSLOduration=1.925038175 podStartE2EDuration="1.925038175s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.911490844 +0000 UTC m=+1.166044289" watchObservedRunningTime="2025-10-18 09:45:14.925038175 +0000 UTC m=+1.179591618"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.925269    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-708733" podStartSLOduration=1.925260501 podStartE2EDuration="1.925260501s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.924754594 +0000 UTC m=+1.179308049" watchObservedRunningTime="2025-10-18 09:45:14.925260501 +0000 UTC m=+1.179813946"
	Oct 18 09:45:14 newest-cni-708733 kubelet[1307]: I1018 09:45:14.935669    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-708733" podStartSLOduration=1.9356454410000001 podStartE2EDuration="1.935645441s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.935616592 +0000 UTC m=+1.190170038" watchObservedRunningTime="2025-10-18 09:45:14.935645441 +0000 UTC m=+1.190198887"
	Oct 18 09:45:18 newest-cni-708733 kubelet[1307]: I1018 09:45:18.607783    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:45:18 newest-cni-708733 kubelet[1307]: I1018 09:45:18.608528    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367317    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsz2l\" (UniqueName: \"kubernetes.io/projected/7618e803-4e75-4661-ab8d-99195c316305-kube-api-access-vsz2l\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367363    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-lib-modules\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367382    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xp52m\" (UniqueName: \"kubernetes.io/projected/77bfd17c-f58c-418b-8e31-c2893c4a3647-kube-api-access-xp52m\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367455    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7618e803-4e75-4661-ab8d-99195c316305-kube-proxy\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367503    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-cni-cfg\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367529    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-xtables-lock\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367545    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-xtables-lock\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.367567    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-lib-modules\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.907341    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z7dcb" podStartSLOduration=0.907315719 podStartE2EDuration="907.315719ms" podCreationTimestamp="2025-10-18 09:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:19.905168518 +0000 UTC m=+6.159721960" watchObservedRunningTime="2025-10-18 09:45:19.907315719 +0000 UTC m=+6.161869164"
	Oct 18 09:45:19 newest-cni-708733 kubelet[1307]: I1018 09:45:19.908005    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nq79m" podStartSLOduration=0.907986197 podStartE2EDuration="907.986197ms" podCreationTimestamp="2025-10-18 09:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:19.896445227 +0000 UTC m=+6.150998672" watchObservedRunningTime="2025-10-18 09:45:19.907986197 +0000 UTC m=+6.162539644"
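The two "Failed creating a mirror pod ... already exists" errors at 09:45:14 look like a benign race: a mirror pod for each static manifest had already been created by an earlier sync iteration, so the second create simply conflicts. Mirror pods are identifiable by their config annotation; for example:

	kubectl --context newest-cni-708733 -n kube-system get pod kube-apiserver-newest-cni-708733 -o jsonpath='{.metadata.annotations.kubernetes\.io/config\.mirror}'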
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-708733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-pcqqp storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner: exit status 1 (62.423634ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-pcqqp" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.11s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.921591ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
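The MK_ADDON_ENABLE_PAUSED failure above is not about metrics-server itself: before enabling an addon, minikube checks whether the cluster is paused by listing runc containers, and on this CRI-O node `sudo runc list -f json` exits 1 because the runc state directory /run/runc was never created. With the docker driver the probe can be replayed against the node container directly (a hypothetical manual invocation, same probe the test hit):

	docker exec default-k8s-diff-port-942905 sudo runc list -f json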
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-942905 describe deploy/metrics-server -n kube-system: exit status 1 (109.513961ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-942905 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-942905
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-942905:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	        "Created": "2025-10-18T09:44:58.37670581Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 382699,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:44:58.421405668Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hosts",
	        "LogPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01-json.log",
	        "Name": "/default-k8s-diff-port-942905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-942905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-942905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	                "LowerDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-942905",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-942905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-942905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09bc61c566af39b0ed65a6746a780b30b1b4efb5da8080db41cdf7534c682848",
	            "SandboxKey": "/var/run/docker/netns/09bc61c566af",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33207"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33210"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33208"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33209"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-942905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:d3:75:3f:df:4a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0fd78e2b1cc4903dcfba13e124358f0be34e6a060a2c5a3353848c2f3b6de6b8",
	                    "EndpointID": "a985e3d06f1f139afaed4ea62e60207b78a389a6ec5ef10a24f2c131a2cc23c2",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-942905",
	                        "b1c05e040b9d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
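The JSON above is the full `docker inspect` dump captured by the post-mortem helper. When only a few fields matter, the same data can be narrowed with a Go template, as the provisioning logs later in this report do for the SSH port; a sketch reusing the container name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same data source as the dump above, reduced to the container state
		// and the host port mapped to 22/tcp (33206 in this run).
		out, err := exec.Command("docker", "inspect",
			"-f", `{{.State.Status}} ssh={{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-942905",
		).CombinedOutput()
		fmt.Println(string(out), err)
	}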
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25: (1.263040023s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:43 UTC │
	│ start   │ -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:43 UTC │ 18 Oct 25 09:44 UTC │
	│ image   │ old-k8s-version-619885 image list --format=json                                                                                                                                                                                               │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p newest-cni-708733 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:45:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:45:35.612113  391835 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:45:35.612372  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612384  391835 out.go:374] Setting ErrFile to fd 2...
	I1018 09:45:35.612390  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612627  391835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:45:35.613204  391835 out.go:368] Setting JSON to false
	I1018 09:45:35.614405  391835 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5280,"bootTime":1760775456,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:45:35.614495  391835 start.go:141] virtualization: kvm guest
	I1018 09:45:35.616488  391835 out.go:179] * [newest-cni-708733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:45:35.617763  391835 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:45:35.617765  391835 notify.go:220] Checking for updates...
	I1018 09:45:35.619047  391835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:45:35.620517  391835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:35.621508  391835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:45:35.622619  391835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:45:35.623653  391835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:45:35.625265  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:35.625773  391835 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:45:35.648625  391835 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:45:35.648730  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.707534  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.696960967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.707701  391835 docker.go:318] overlay module found
	I1018 09:45:35.710095  391835 out.go:179] * Using the docker driver based on existing profile
	I1018 09:45:35.711170  391835 start.go:305] selected driver: docker
	I1018 09:45:35.711185  391835 start.go:925] validating driver "docker" against &{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.711263  391835 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:45:35.711899  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.766563  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.756934982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.766911  391835 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:35.766941  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:35.767009  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:35.767062  391835 start.go:349] cluster config:
	{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.768913  391835 out.go:179] * Starting "newest-cni-708733" primary control-plane node in "newest-cni-708733" cluster
	I1018 09:45:35.770258  391835 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:45:35.771551  391835 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:45:35.772648  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:35.772696  391835 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:45:35.772707  391835 cache.go:58] Caching tarball of preloaded images
	I1018 09:45:35.772786  391835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:45:35.772907  391835 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:45:35.772988  391835 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:45:35.773146  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:35.793193  391835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:45:35.793211  391835 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:45:35.793226  391835 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:45:35.793247  391835 start.go:360] acquireMachinesLock for newest-cni-708733: {Name:mkb1aaee475623ac79c9cbc5f8d5e2dda34020d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:45:35.793300  391835 start.go:364] duration metric: took 36.906µs to acquireMachinesLock for "newest-cni-708733"
	I1018 09:45:35.793316  391835 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:45:35.793321  391835 fix.go:54] fixHost starting: 
	I1018 09:45:35.793514  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:35.810764  391835 fix.go:112] recreateIfNeeded on newest-cni-708733: state=Stopped err=<nil>
	W1018 09:45:35.810808  391835 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:45:32.487875  391061 out.go:252] * Restarting existing docker container for "embed-certs-055175" ...
	I1018 09:45:32.487930  391061 cli_runner.go:164] Run: docker start embed-certs-055175
	I1018 09:45:32.746738  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:32.766565  391061 kic.go:430] container "embed-certs-055175" state is running.
	I1018 09:45:32.767066  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:32.787489  391061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/config.json ...
	I1018 09:45:32.787761  391061 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:32.787860  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:32.807525  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:32.807763  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:32.807779  391061 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:32.808459  391061 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36084->127.0.0.1:33217: read: connection reset by peer
	I1018 09:45:35.951449  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:35.951481  391061 ubuntu.go:182] provisioning hostname "embed-certs-055175"
	I1018 09:45:35.951567  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:35.970253  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:35.970525  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:35.970577  391061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-055175 && echo "embed-certs-055175" | sudo tee /etc/hostname
	I1018 09:45:36.120062  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:36.120141  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.139369  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.139660  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.139685  391061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-055175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-055175/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-055175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:36.279283  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:36.279331  391061 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:36.279360  391061 ubuntu.go:190] setting up certificates
	I1018 09:45:36.279373  391061 provision.go:84] configureAuth start
	I1018 09:45:36.279436  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:36.301592  391061 provision.go:143] copyHostCerts
	I1018 09:45:36.301663  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:36.301685  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:36.301767  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:36.301935  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:36.301952  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:36.301999  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:36.302090  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:36.302102  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:36.302140  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:36.302218  391061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-055175 san=[127.0.0.1 192.168.76.2 embed-certs-055175 localhost minikube]
	I1018 09:45:36.521938  391061 provision.go:177] copyRemoteCerts
	I1018 09:45:36.522007  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:36.522049  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.539806  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:36.638382  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:36.656542  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:45:36.674914  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:36.692375  391061 provision.go:87] duration metric: took 412.989421ms to configureAuth
	I1018 09:45:36.692399  391061 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:36.692583  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:36.692696  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.711813  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.712122  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.712145  391061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:36.996777  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:36.996808  391061 machine.go:96] duration metric: took 4.209028137s to provisionDockerMachine
	I1018 09:45:36.996838  391061 start.go:293] postStartSetup for "embed-certs-055175" (driver="docker")
	I1018 09:45:36.996853  391061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:36.996924  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:36.996992  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.015643  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.112419  391061 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:37.115866  391061 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:37.115892  391061 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:37.115901  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:37.115940  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:37.116006  391061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:37.116105  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:37.123537  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:37.140936  391061 start.go:296] duration metric: took 144.080164ms for postStartSetup
	I1018 09:45:37.141011  391061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:37.141113  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.158840  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.254266  391061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:37.258887  391061 fix.go:56] duration metric: took 4.791318273s for fixHost
	I1018 09:45:37.258913  391061 start.go:83] releasing machines lock for "embed-certs-055175", held for 4.791367111s
	I1018 09:45:37.258983  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:37.276795  391061 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:37.276844  391061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:37.276893  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.276895  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.295580  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.295867  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.442421  391061 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:37.449145  391061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:37.485446  391061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:37.490286  391061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:37.490344  391061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:37.498440  391061 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:45:37.498462  391061 start.go:495] detecting cgroup driver to use...
	I1018 09:45:37.498498  391061 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:37.498541  391061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:37.512575  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:37.524383  391061 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:37.524431  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:37.538338  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:37.550505  391061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:37.630207  391061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:37.707104  391061 docker.go:234] disabling docker service ...
	I1018 09:45:37.707165  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:37.721802  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:37.734681  391061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:37.810403  391061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:37.892105  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:37.904421  391061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:37.918908  391061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:37.919002  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.927975  391061 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:37.928025  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.937739  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.946621  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.955765  391061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:37.963854  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.972623  391061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.981215  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.990025  391061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:37.997012  391061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:38.004111  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.083139  391061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:38.194280  391061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:38.194350  391061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:38.198391  391061 start.go:563] Will wait 60s for crictl version
	I1018 09:45:38.198444  391061 ssh_runner.go:195] Run: which crictl
	I1018 09:45:38.202260  391061 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:38.226451  391061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:38.226528  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.255560  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.285154  391061 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:36.049688  353123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.056359588s)
	W1018 09:45:36.049730  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1018 09:45:36.049740  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:36.049755  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:36.082656  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:36.082690  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:36.185625  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:36.185657  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:36.223015  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:36.223045  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:36.257875  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:36.257910  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:36.320259  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:36.320290  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:38.286182  391061 cli_runner.go:164] Run: docker network inspect embed-certs-055175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
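
The docker network inspect call above uses a single Go template to flatten name, driver, subnet, gateway, MTU, and container IPs into one JSON object. A reduced sketch of the same technique, querying only the subnet (network name as in the log; error handling trimmed):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // {{range .IPAM.Config}}{{.Subnet}}{{end}} walks the IPAM blocks,
        // exactly as the larger template in the log does.
        out, err := exec.Command("docker", "network", "inspect",
            "embed-certs-055175", "--format",
            "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("subnet: %s", out)
    }
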
	I1018 09:45:38.303601  391061 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:38.307969  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
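
The one-liner above is an idempotent /etc/hosts upsert: drop any line ending in the tab-separated hostname, append a fresh "IP<TAB>name" entry, and copy the temp file back. The same logic in pure Go (upsertHosts is a hypothetical helper, not minikube code):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHosts removes any stale line for name and appends ip<TAB>name,
    // mirroring the grep -v / echo pipeline in the log.
    func upsertHosts(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry; re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        fmt.Print(upsertHosts("127.0.0.1\tlocalhost\n", "192.168.76.1", "host.minikube.internal"))
    }
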
	I1018 09:45:38.318414  391061 kubeadm.go:883] updating cluster {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:38.318562  391061 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:38.318621  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.351678  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.351700  391061 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:38.351743  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.376983  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.377006  391061 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:38.377014  391061 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:38.377106  391061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-055175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:38.377172  391061 ssh_runner.go:195] Run: crio config
	I1018 09:45:38.422001  391061 cni.go:84] Creating CNI manager for ""
	I1018 09:45:38.422023  391061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:38.422042  391061 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:45:38.422063  391061 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-055175 NodeName:embed-certs-055175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:38.422186  391061 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-055175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:38.422240  391061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:38.430216  391061 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:38.430276  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:38.438081  391061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:38.450317  391061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:38.462520  391061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:45:38.474657  391061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:38.478282  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:38.488221  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.566896  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:38.591111  391061 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175 for IP: 192.168.76.2
	I1018 09:45:38.591138  391061 certs.go:195] generating shared ca certs ...
	I1018 09:45:38.591161  391061 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:38.591310  391061 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:38.591384  391061 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:38.591402  391061 certs.go:257] generating profile certs ...
	I1018 09:45:38.591504  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/client.key
	I1018 09:45:38.591598  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key.d17ebb9e
	I1018 09:45:38.591678  391061 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key
	I1018 09:45:38.591811  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:38.591882  391061 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:38.591896  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:38.591930  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:38.591966  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:38.591999  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:38.592055  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:38.592628  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:38.611514  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:38.630402  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:38.649635  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:38.673181  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:45:38.692242  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:45:38.709954  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:38.728001  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:38.745902  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:38.763592  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:38.781470  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:38.799868  391061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:38.812542  391061 ssh_runner.go:195] Run: openssl version
	I1018 09:45:38.818721  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:38.827249  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831071  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831126  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.867725  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:38.876160  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:38.884525  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888219  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888264  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.922467  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:38.930945  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:38.939990  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943700  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943757  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.978998  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
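
Each PEM dropped under /usr/share/ca-certificates is then symlinked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0 above); that hash-named link is what OpenSSL's default verify path looks up. A sketch of the hash-and-link step, shelling out to openssl the same way the driver does:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        // `openssl x509 -hash -noout` prints the subject hash that names
        // the <hash>.0 symlink in /etc/ssl/certs.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        if err := os.Symlink(pem, link); err != nil {
            fmt.Println("link exists or not permitted:", err)
        }
    }
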
	I1018 09:45:38.987211  391061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:38.991075  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:39.025412  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:39.059499  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:39.101020  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:39.146140  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:39.199431  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
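
The six openssl runs above all pass -checkend 86400, i.e. "fail if the certificate expires within 24 hours", which is how the driver decides whether control-plane certs need regeneration. The equivalent test with Go's crypto/x509 (helper name illustrative; path from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // inside d, matching `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }
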
	I1018 09:45:39.253543  391061 kubeadm.go:400] StartCluster: {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:39.253654  391061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:39.253726  391061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:39.287480  391061 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:45:39.287507  391061 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:45:39.287514  391061 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:45:39.287518  391061 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:45:39.287523  391061 cri.go:89] found id: ""
	I1018 09:45:39.287581  391061 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:39.301644  391061 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:39.301714  391061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:39.310767  391061 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:39.310787  391061 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:39.310879  391061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:39.319011  391061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:39.319811  391061 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-055175" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.320288  391061 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-055175" cluster setting kubeconfig missing "embed-certs-055175" context setting]
	I1018 09:45:39.321074  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.322981  391061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:39.330833  391061 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:45:39.330865  391061 kubeadm.go:601] duration metric: took 20.071828ms to restartPrimaryControlPlane
	I1018 09:45:39.330874  391061 kubeadm.go:402] duration metric: took 77.343946ms to StartCluster
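
restartPrimaryControlPlane finishes in ~20ms here because the diff of kubeadm.yaml against kubeadm.yaml.new came back clean, so no kubeadm phase reruns. Stripped of the ssh plumbing, the decision reduces to a byte comparison; a minimal sketch with the paths from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    func main() {
        current, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
        proposed, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
        if err1 != nil || err2 != nil {
            fmt.Println("missing config; full init required")
            return
        }
        if bytes.Equal(current, proposed) {
            fmt.Println("running cluster does not require reconfiguration")
        } else {
            fmt.Println("config drifted; rerun kubeadm")
        }
    }
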
	I1018 09:45:39.330893  391061 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.330969  391061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.332950  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.333199  391061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:39.333382  391061 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:39.333486  391061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-055175"
	I1018 09:45:39.333505  391061 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-055175"
	W1018 09:45:39.333518  391061 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:39.333527  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:39.333540  391061 addons.go:69] Setting dashboard=true in profile "embed-certs-055175"
	I1018 09:45:39.333583  391061 addons.go:238] Setting addon dashboard=true in "embed-certs-055175"
	W1018 09:45:39.333594  391061 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:39.333598  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333601  391061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-055175"
	I1018 09:45:39.333631  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333630  391061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-055175"
	I1018 09:45:39.334122  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334143  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334172  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.335198  391061 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:39.336588  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:39.364419  391061 addons.go:238] Setting addon default-storageclass=true in "embed-certs-055175"
	W1018 09:45:39.364441  391061 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:39.364467  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.364941  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.365279  391061 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:39.365348  391061 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:39.366461  391061 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.366483  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:39.366536  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.369244  391061 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:35.812543  391835 out.go:252] * Restarting existing docker container for "newest-cni-708733" ...
	I1018 09:45:35.812620  391835 cli_runner.go:164] Run: docker start newest-cni-708733
	I1018 09:45:36.066412  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:36.087638  391835 kic.go:430] container "newest-cni-708733" state is running.
	I1018 09:45:36.088075  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:36.108867  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:36.109119  391835 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:36.109186  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:36.129372  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.129746  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:36.129764  391835 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:36.130410  391835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56176->127.0.0.1:33223: read: connection reset by peer
	I1018 09:45:39.281604  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.281635  391835 ubuntu.go:182] provisioning hostname "newest-cni-708733"
	I1018 09:45:39.281704  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.304537  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.304897  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.304921  391835 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708733 && echo "newest-cni-708733" | sudo tee /etc/hostname
	I1018 09:45:39.471607  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.471684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.493328  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.493535  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.493548  391835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:39.648618  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:39.648648  391835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:39.648670  391835 ubuntu.go:190] setting up certificates
	I1018 09:45:39.648683  391835 provision.go:84] configureAuth start
	I1018 09:45:39.648740  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:39.671912  391835 provision.go:143] copyHostCerts
	I1018 09:45:39.671977  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:39.672067  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:39.672162  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:39.672259  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:39.672269  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:39.672296  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:39.672348  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:39.672358  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:39.672380  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:39.672424  391835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708733 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733]
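
The server certificate generated above embeds the SAN list [127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733], so the container's published endpoints all verify against one cert. A sketch of the corresponding crypto/x509 template (serial number and key-usage choices are illustrative, not minikube's exact values):

    package main

    import (
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1), // illustrative
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-708733"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
            // SANs exactly as listed in the provision log:
            DNSNames:    []string{"localhost", "minikube", "newest-cni-708733"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        fmt.Printf("template with %d DNS and %d IP SANs\n", len(tmpl.DNSNames), len(tmpl.IPAddresses))
    }
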
	I1018 09:45:39.936585  391835 provision.go:177] copyRemoteCerts
	I1018 09:45:39.936652  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:39.936752  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.959548  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.065365  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:40.086638  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:40.108607  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:40.132253  391835 provision.go:87] duration metric: took 483.553625ms to configureAuth
	I1018 09:45:40.132292  391835 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:40.132527  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:40.132665  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.153078  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:40.153352  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:40.153370  391835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:40.448100  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:40.448132  391835 machine.go:96] duration metric: took 4.33899731s to provisionDockerMachine
	I1018 09:45:40.448147  391835 start.go:293] postStartSetup for "newest-cni-708733" (driver="docker")
	I1018 09:45:40.448162  391835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:40.448233  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:40.448284  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.474620  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.577567  391835 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:40.582063  391835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:40.582097  391835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:40.582110  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:40.582160  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:40.582267  391835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:40.582402  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:40.591516  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:39.370168  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:45:39.370188  391061 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:45:39.370247  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.400814  391061 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.400915  391061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:39.400996  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.405011  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.407383  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.425670  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.505286  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:39.520778  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.523155  391061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:39.523608  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:45:39.523631  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:45:39.538779  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.539364  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:45:39.539438  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:45:39.560150  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:45:39.560179  391061 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:45:39.581867  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:45:39.581933  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:45:39.596852  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:45:39.596884  391061 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:45:39.612014  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:45:39.612039  391061 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:45:39.626575  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:45:39.626600  391061 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:45:39.639500  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:45:39.639525  391061 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:45:39.654074  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:39.654098  391061 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:45:39.670286  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
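
The ten staged dashboard manifests are applied in a single kubectl invocation, one -f flag per file, rather than ten apiserver round-trips. Building that argument list is mechanical; a sketch (manifest list abridged):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        manifests := []string{ // abridged; the log applies ten files
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.34.1/kubectl", "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("sudo", args...)
        fmt.Println(cmd.String()) // inspect before handing to ssh_runner
    }
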
	I1018 09:45:40.899341  391061 node_ready.go:49] node "embed-certs-055175" is "Ready"
	I1018 09:45:40.899374  391061 node_ready.go:38] duration metric: took 1.376176965s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:40.899390  391061 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:40.899443  391061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:41.576093  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055278034s)
	I1018 09:45:41.576162  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.037341616s)
	I1018 09:45:41.576238  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.905916583s)
	I1018 09:45:41.576274  391061 api_server.go:72] duration metric: took 2.24304532s to wait for apiserver process to appear ...
	I1018 09:45:41.576289  391061 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:41.576309  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:41.578020  391061 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-055175 addons enable metrics-server
	
	I1018 09:45:41.582881  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:41.582904  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
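
Both healthz dumps fail only on post-start hooks (rbac/bootstrap-roles, and initially scheduling/bootstrap-system-priority-classes) that clear on their own once the bootstrap controllers finish, so the driver simply re-polls until /healthz returns 200. A sketch of that wait loop, assuming the apiserver's self-signed serving cert is skipped the way a bootstrap probe must:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver cert is not in the host trust store yet.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s node wait
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil && resp.StatusCode == http.StatusOK {
                resp.Body.Close()
                fmt.Println("apiserver healthy")
                return
            }
            if resp != nil {
                resp.Body.Close()
            }
            time.Sleep(time.Second) // a 500 here means hooks still settling
        }
        fmt.Println("timed out waiting for healthz")
    }
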
	I1018 09:45:41.589049  391061 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:45:40.621550  391835 start.go:296] duration metric: took 173.38515ms for postStartSetup
	I1018 09:45:40.621639  391835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:40.621684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.643288  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.745039  391835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
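
postStartSetup closes with two df probes on /var: percent used (awk 'NR==2{print $5}') and gigabytes free (NR==2{print $4}), which feed minikube's low-disk warnings. Parsing the same row in Go instead of awk:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("df", "-h", "/var").Output()
        if err != nil {
            panic(err)
        }
        rows := strings.Split(strings.TrimSpace(string(out)), "\n")
        fields := strings.Fields(rows[1]) // NR==2: the /var data row
        fmt.Println("use%:", fields[4])   // $5 in the awk above
    }
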
	I1018 09:45:40.750133  391835 fix.go:56] duration metric: took 4.956803913s for fixHost
	I1018 09:45:40.750167  391835 start.go:83] releasing machines lock for "newest-cni-708733", held for 4.95685606s
	I1018 09:45:40.750236  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:40.781167  391835 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:40.781292  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.781186  391835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:40.781618  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.812063  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.813770  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.940361  391835 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:41.006764  391835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:41.061782  391835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:41.068085  391835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:41.068161  391835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:41.078354  391835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
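
Before settling on kindnet, the driver renames any stray bridge or podman CNI configs to *.mk_disabled so CRI-O cannot load them; here the find matched nothing. The same sweep in pure Go (directory and suffix as in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        dir := "/etc/cni/net.d"
        entries, err := os.ReadDir(dir) // find -maxdepth 1 equivalent
        if err != nil {
            fmt.Println("no CNI config dir:", err)
            return
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                old := filepath.Join(dir, name)
                os.Rename(old, old+".mk_disabled") // same effect as find -exec mv
            }
        }
    }
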
	I1018 09:45:41.078379  391835 start.go:495] detecting cgroup driver to use...
	I1018 09:45:41.078424  391835 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:41.078467  391835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:41.098853  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:41.116027  391835 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:41.116089  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:41.133582  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:41.150108  391835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:41.258784  391835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:41.365493  391835 docker.go:234] disabling docker service ...
	I1018 09:45:41.365568  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:41.389182  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:41.405299  391835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:41.512499  391835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:41.597024  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:41.609959  391835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:41.624662  391835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:41.624735  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.634047  391835 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:41.634099  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.643165  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.652394  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.663256  391835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:41.672317  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.684071  391835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.694058  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.705032  391835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:41.715244  391835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:41.725978  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:41.812310  391835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:41.928316  391835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:41.928398  391835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:41.933300  391835 start.go:563] Will wait 60s for crictl version
	I1018 09:45:41.933375  391835 ssh_runner.go:195] Run: which crictl
	I1018 09:45:41.937695  391835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:41.968232  391835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:41.968322  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.008722  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.051058  391835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:42.052454  391835 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:42.076948  391835 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:42.082993  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
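The /etc/hosts edit above is idempotent: any stale line for the name is stripped before the fresh mapping is appended, and the result is installed with sudo cp, since a plain shell redirection into the root-owned file would run unprivileged. The same pattern, spelled out:

	# Idempotent /etc/hosts update, following the command in the log.
	NAME=host.minikube.internal
	IP=192.168.103.1
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts   # sudo cp, because '>' alone would run unprivileged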
	I1018 09:45:42.098027  391835 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:45:41.590293  391061 addons.go:514] duration metric: took 2.256916495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:45:42.076937  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:42.081653  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:42.081688  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
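A 500 from /healthz with only [-]poststarthook/rbac/bootstrap-roles failing is the normal transient state while the apiserver finishes seeding RBAC objects; the client simply retries until every check passes. A minimal polling loop over the same endpoint (a sketch; under default RBAC, /healthz is readable without credentials):

	# Poll the apiserver health endpoint until it reports healthy, as the
	# retry loop in the log does. -k skips verification of the self-signed cert.
	until [ "$(curl -kso /dev/null -w '%{http_code}' https://192.168.76.2:8443/healthz)" = 200 ]; do
	  sleep 1
	done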
	I1018 09:45:42.099318  391835 kubeadm.go:883] updating cluster {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:42.099457  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:42.099596  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.132475  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.132500  391835 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:42.132566  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.158774  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.158804  391835 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:42.158815  391835 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:42.158983  391835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-708733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:42.159100  391835 ssh_runner.go:195] Run: crio config
	I1018 09:45:42.208450  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:42.208480  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:42.208500  391835 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:45:42.208539  391835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-708733 NodeName:newest-cni-708733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:42.208747  391835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-708733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
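The three documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A multi-document file like this can be sanity-checked on the node before use (a sketch; minikube drives kubeadm itself):

	# Validate a generated multi-document kubeadm config against the API
	# versions it declares; `kubeadm config validate` exists in recent releases.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new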
	I1018 09:45:42.208839  391835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:42.217704  391835 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:42.217771  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:42.225608  391835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:42.238980  391835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:42.255680  391835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:45:42.272042  391835 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:42.276501  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:42.289252  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:42.374516  391835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:42.395343  391835 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733 for IP: 192.168.103.2
	I1018 09:45:42.395365  391835 certs.go:195] generating shared ca certs ...
	I1018 09:45:42.395386  391835 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:42.395555  391835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:42.395633  391835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:42.395649  391835 certs.go:257] generating profile certs ...
	I1018 09:45:42.395732  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key
	I1018 09:45:42.395806  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd
	I1018 09:45:42.395874  391835 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key
	I1018 09:45:42.395977  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:42.396006  391835 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:42.396018  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:42.396049  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:42.396085  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:42.396116  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:42.396170  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:42.396756  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:42.417067  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:42.439230  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:42.459862  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:42.484661  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:45:42.505965  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:42.524153  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:42.542892  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:42.561246  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:42.579007  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:42.601111  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:42.619543  391835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:42.632771  391835 ssh_runner.go:195] Run: openssl version
	I1018 09:45:42.639054  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:42.648098  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652060  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652121  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.689227  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:42.698817  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:42.709921  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715254  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715316  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.758602  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:42.767388  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:42.776532  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780462  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780530  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.817681  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
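The <hash>.0 link names above come from OpenSSL's subject-hash lookup scheme: TLS clients resolve a CA under /etc/ssl/certs by the hash of its subject, so each trusted PEM needs a symlink named after that hash. The derivation, as a sketch:

	# Derive the subject-hash link name for a CA, matching the ln -fs calls above.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"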
	I1018 09:45:42.826307  391835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:42.830455  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:42.868283  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:42.914730  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:42.969311  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:43.013486  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:43.072727  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
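Each openssl run above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; that exit code is what decides whether control-plane certs need regeneration. For example:

	# -checkend N succeeds only if the cert is still valid N seconds from now.
	for c in apiserver-kubelet-client etcd/server etcd/peer front-proxy-client; do
	  sudo openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
	    || echo "certificate $c expires within 24h; regeneration needed"
	done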
	I1018 09:45:43.117083  391835 kubeadm.go:400] StartCluster: {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:43.117198  391835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:43.117268  391835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:43.149877  391835 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:43.149897  391835 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:43.149902  391835 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:43.149907  391835 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:43.149910  391835 cri.go:89] found id: ""
	I1018 09:45:43.149950  391835 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:43.164027  391835 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:43Z" level=error msg="open /run/runc: no such file or directory"
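The container listing above filters by the io.kubernetes.pod.namespace label through crictl, then consults runc for paused containers; "open /run/runc: no such file or directory" only means runc has no state directory, i.e. nothing is paused, so the warning is benign. The two probes, stated plainly:

	# List kube-system containers via the CRI, then check runc state for
	# paused containers; an absent /run/runc just means none are paused.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo runc list -f json 2>/dev/null || echo "no runc state; no paused containers"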
	I1018 09:45:43.164105  391835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:43.173542  391835 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:43.173562  391835 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:43.173610  391835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:43.183087  391835 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:43.184252  391835 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-708733" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.185121  391835 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-708733" cluster setting kubeconfig missing "newest-cni-708733" context setting]
	I1018 09:45:43.186065  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.188016  391835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:43.197622  391835 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:45:43.197652  391835 kubeadm.go:601] duration metric: took 24.083385ms to restartPrimaryControlPlane
	I1018 09:45:43.197662  391835 kubeadm.go:402] duration metric: took 80.590487ms to StartCluster
	I1018 09:45:43.197680  391835 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.197747  391835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.200187  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.200440  391835 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:43.200573  391835 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:43.200694  391835 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-708733"
	I1018 09:45:43.200697  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:43.200716  391835 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-708733"
	W1018 09:45:43.200724  391835 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:43.200723  391835 addons.go:69] Setting dashboard=true in profile "newest-cni-708733"
	I1018 09:45:43.200740  391835 addons.go:69] Setting default-storageclass=true in profile "newest-cni-708733"
	I1018 09:45:43.200755  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.200765  391835 addons.go:238] Setting addon dashboard=true in "newest-cni-708733"
	I1018 09:45:43.200767  391835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-708733"
	W1018 09:45:43.200775  391835 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:43.200809  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.201120  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201273  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201290  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.203194  391835 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:43.205674  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:43.230206  391835 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:43.230277  391835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:43.231265  391835 addons.go:238] Setting addon default-storageclass=true in "newest-cni-708733"
	W1018 09:45:43.231300  391835 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:43.231412  391835 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:43.231426  391835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:43.231473  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.231666  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.232269  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.232392  391835 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:38.888310  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:40.473062  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:55036->192.168.85.2:8443: read: connection reset by peer
	I1018 09:45:40.473131  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:40.473212  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:40.506845  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:40.506916  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:40.506931  353123 cri.go:89] found id: ""
	I1018 09:45:40.506946  353123 logs.go:282] 2 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:40.507011  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.511163  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.515230  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:40.515304  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:40.546337  353123 cri.go:89] found id: ""
	I1018 09:45:40.546363  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.546373  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:40.546380  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:40.546439  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:40.576467  353123 cri.go:89] found id: ""
	I1018 09:45:40.576496  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.576507  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:40.576515  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:40.576575  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:40.618939  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:40.618964  353123 cri.go:89] found id: ""
	I1018 09:45:40.618974  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:40.619033  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.623516  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:40.623599  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:40.659535  353123 cri.go:89] found id: ""
	I1018 09:45:40.659564  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.659575  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:40.659606  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:40.659671  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:40.693235  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:40.693264  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:40.693269  353123 cri.go:89] found id: ""
	I1018 09:45:40.693279  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:40.693345  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.698191  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.702375  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:40.702453  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:40.740227  353123 cri.go:89] found id: ""
	I1018 09:45:40.740255  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.740266  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:40.740281  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:40.740346  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:40.778699  353123 cri.go:89] found id: ""
	I1018 09:45:40.778725  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.778736  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:40.778752  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:40.778767  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:40.832286  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:40.832323  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:40.985957  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:40.986003  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	W1018 09:45:41.025599  353123 logs.go:130] failed kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d": Process exited with status 1
	stdout:
	
	stderr:
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	 output: 
	** stderr ** 
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	
	** /stderr **
	I1018 09:45:41.025624  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:41.025640  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:41.093529  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:41.093584  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:41.122401  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:41.122440  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:41.207097  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:41.207126  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:41.207143  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:41.249695  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:41.249733  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:41.281023  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:41.281062  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:41.321273  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:41.321315  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	
	
	==> CRI-O <==
	Oct 18 09:45:30 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:30.180851492Z" level=info msg="Starting container: 9dd3fa83a920915776608f0cc9ac794db637077262001ec6937927975c3e494c" id=bf6476aa-d6ca-445b-b4b1-df010d54852a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:30 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:30.182522883Z" level=info msg="Started container" PID=1836 containerID=9dd3fa83a920915776608f0cc9ac794db637077262001ec6937927975c3e494c description=kube-system/coredns-66bc5c9577-g6bf9/coredns id=bf6476aa-d6ca-445b-b4b1-df010d54852a name=/runtime.v1.RuntimeService/StartContainer sandboxID=34e9a7fd5dd3326b6ddd59e257b1b9d8f2811637dc841b82bc429b6f2248b7e3
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.139734333Z" level=info msg="Running pod sandbox: default/busybox/POD" id=f00a4473-58a0-418f-8c4c-30d9a50a95be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.139857333Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.144635609Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d344de1ccf0399c8d2faba7870d8f566706d49d21435462597a741de3e48a7fb UID:3d931b08-4593-4046-8efd-e406a9611796 NetNS:/var/run/netns/b230342b-ae13-4c43-bc31-1e641cee6094 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d12308}] Aliases:map[]}"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.144685746Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.154016595Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d344de1ccf0399c8d2faba7870d8f566706d49d21435462597a741de3e48a7fb UID:3d931b08-4593-4046-8efd-e406a9611796 NetNS:/var/run/netns/b230342b-ae13-4c43-bc31-1e641cee6094 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d12308}] Aliases:map[]}"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.154178777Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.154913712Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.15566798Z" level=info msg="Ran pod sandbox d344de1ccf0399c8d2faba7870d8f566706d49d21435462597a741de3e48a7fb with infra container: default/busybox/POD" id=f00a4473-58a0-418f-8c4c-30d9a50a95be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.156900854Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4e34e6eb-da6a-4e24-95d3-322e69c7a2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.157051173Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4e34e6eb-da6a-4e24-95d3-322e69c7a2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.157105166Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4e34e6eb-da6a-4e24-95d3-322e69c7a2a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.157878172Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1f719c3e-a9e4-447f-a14d-b348aff0ce16 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:45:33 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:33.161156865Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.190593598Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1f719c3e-a9e4-447f-a14d-b348aff0ce16 name=/runtime.v1.ImageService/PullImage
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.191410381Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=87d44c5c-197b-4da4-8159-0114b1206b45 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.192972776Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9762c70f-b002-4f90-8626-bc1b57b3f73f name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.197390493Z" level=info msg="Creating container: default/busybox/busybox" id=8582845e-5bbc-4930-b2e4-6a233e55920c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.198119001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.202243505Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.203493547Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.223097522Z" level=info msg="Created container 072d4ce4a5f28c3adc63861076e7a7546c170252604dc2e42ca5b731c0c201ae: default/busybox/busybox" id=8582845e-5bbc-4930-b2e4-6a233e55920c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.223672133Z" level=info msg="Starting container: 072d4ce4a5f28c3adc63861076e7a7546c170252604dc2e42ca5b731c0c201ae" id=4154f7f0-d8bc-4740-80db-091e05bf3114 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:35 default-k8s-diff-port-942905 crio[769]: time="2025-10-18T09:45:35.225300599Z" level=info msg="Started container" PID=1912 containerID=072d4ce4a5f28c3adc63861076e7a7546c170252604dc2e42ca5b731c0c201ae description=default/busybox/busybox id=4154f7f0-d8bc-4740-80db-091e05bf3114 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d344de1ccf0399c8d2faba7870d8f566706d49d21435462597a741de3e48a7fb
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	072d4ce4a5f28       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   d344de1ccf039       busybox                                                default
	9dd3fa83a9209       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   34e9a7fd5dd33       coredns-66bc5c9577-g6bf9                               kube-system
	095d635a19cec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   cc67a7e240f35       storage-provisioner                                    kube-system
	943ebbb11d8eb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   ea464fcc79ca9       kube-proxy-x9fjs                                       kube-system
	c873249131296       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   7f1ad40f286d3       kindnet-xtmcm                                          kube-system
	1d29514a844de       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   09932c9b135d3       kube-scheduler-default-k8s-diff-port-942905            kube-system
	bef73128a3d19       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   7268d1ab4db03       kube-controller-manager-default-k8s-diff-port-942905   kube-system
	56116f71a7dbe       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   28f38391bdcd9       kube-apiserver-default-k8s-diff-port-942905            kube-system
	3a56869436b0b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   8a7a376538cf1       etcd-default-k8s-diff-port-942905                      kube-system
	
	
	==> coredns [9dd3fa83a920915776608f0cc9ac794db637077262001ec6937927975c3e494c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56253 - 50414 "HINFO IN 5087179879933844334.2009989607509953904. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.072044472s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-942905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-942905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-942905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-942905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:45:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:45:43 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:45:43 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:45:43 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:45:43 +0000   Sat, 18 Oct 2025 09:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-942905
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2840e9d8-1f17-40a1-ae4d-ed361a5c39b0
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-g6bf9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-942905                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-xtmcm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-942905             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-942905    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-x9fjs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-942905             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node default-k8s-diff-port-942905 event: Registered Node default-k8s-diff-port-942905 in Controller
	  Normal  NodeReady                15s                kubelet          Node default-k8s-diff-port-942905 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [3a56869436b0b6e77143fe2b557220b5797811bb19fb77758a78ed712ac35232] <==
	{"level":"warn","ts":"2025-10-18T09:45:09.799759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.808413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.818844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.827687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.837511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.845922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.854903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.868076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.873307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.882378Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.890780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.903085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.910819Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.919581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.927898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.934807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.952263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.963187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.968606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.976276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.984526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:09.996909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.004977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.013783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:10.076523Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59282","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:45:44 up  1:28,  0 user,  load average: 2.37, 2.76, 1.86
	Linux default-k8s-diff-port-942905 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c8732491312967dd6a7158d2f38967bfa84d7939a3849b827018ad069dc699fd] <==
	I1018 09:45:19.213754       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:19.285521       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:45:19.285693       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:19.285712       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:19.285735       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:19.608282       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:19.608349       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:19.608365       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:19.608535       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:19.708541       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:19.708706       1 metrics.go:72] Registering metrics
	I1018 09:45:19.708876       1 controller.go:711] "Syncing nftables rules"
	I1018 09:45:29.424024       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:45:29.424098       1 main.go:301] handling current node
	I1018 09:45:39.420348       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:45:39.420386       1 main.go:301] handling current node
	
	
	==> kube-apiserver [56116f71a7dbed0df85608130376f2b30fb143a99140b7691916b7f34d602403] <==
	I1018 09:45:10.671589       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:45:10.671620       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:45:10.671633       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:45:10.671641       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:45:10.671648       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:45:10.686302       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:45:10.852276       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:11.558244       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1018 09:45:11.564612       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1018 09:45:11.564636       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:12.109804       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:12.157912       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:12.264671       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1018 09:45:12.273001       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1018 09:45:12.274236       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:12.282572       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:45:12.586986       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:13.022617       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:13.033025       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1018 09:45:13.044166       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1018 09:45:18.340052       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:18.343670       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:18.438564       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:45:18.689941       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1018 09:45:42.926646       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:44492: use of closed network connection
	
	
	==> kube-controller-manager [bef73128a3d19121f11782aafd19b10d81598046e76e616e4e40c945a3d1c90d] <==
	I1018 09:45:17.585899       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:45:17.585931       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1018 09:45:17.585950       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:45:17.585962       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:45:17.586007       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:17.586026       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1018 09:45:17.586100       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:45:17.586107       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:45:17.586618       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:45:17.586801       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:45:17.587841       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1018 09:45:17.587864       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:45:17.592031       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:45:17.593281       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:17.594502       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:45:17.594547       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:17.594559       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:45:17.594565       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:45:17.594633       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:45:17.594759       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-942905"
	I1018 09:45:17.594864       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1018 09:45:17.597088       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:17.601450       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:45:17.611930       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:32.596970       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [943ebbb11d8ebaf7b27ee2bbe50a982acdbec40e8d7cd75e3e1d1e306b1df18e] <==
	I1018 09:45:19.112145       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:19.185219       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:19.285974       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:19.286003       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:45:19.286090       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:19.304812       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:19.304882       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:19.310584       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:19.311138       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:19.311178       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:19.312506       1 config.go:200] "Starting service config controller"
	I1018 09:45:19.312537       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:19.312575       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:19.312581       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:19.312592       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:19.312604       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:19.312707       1 config.go:309] "Starting node config controller"
	I1018 09:45:19.312714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:19.312720       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:45:19.412731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:19.412769       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:45:19.412790       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d29514a844de20458ba43b9734789dadc094b5fbbf05d2f7d02241b47745825] <==
	E1018 09:45:10.621076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1018 09:45:10.621104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:45:10.621201       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:45:10.621324       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:45:10.621359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:45:10.621433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:45:10.621521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1018 09:45:10.621555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:45:10.621591       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:45:11.432641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1018 09:45:11.434780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1018 09:45:11.473554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1018 09:45:11.541912       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1018 09:45:11.592274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1018 09:45:11.603473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1018 09:45:11.624021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1018 09:45:11.659477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1018 09:45:11.680593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1018 09:45:11.749154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1018 09:45:11.760291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1018 09:45:11.795299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1018 09:45:11.807656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1018 09:45:11.847432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1018 09:45:11.916602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1018 09:45:13.912931       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:45:14 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:14.006268    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-942905" podStartSLOduration=1.006245798 podStartE2EDuration="1.006245798s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.006175349 +0000 UTC m=+1.213031020" watchObservedRunningTime="2025-10-18 09:45:14.006245798 +0000 UTC m=+1.213101460"
	Oct 18 09:45:14 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:14.006480    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-942905" podStartSLOduration=1.006469489 podStartE2EDuration="1.006469489s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:13.988228221 +0000 UTC m=+1.195083896" watchObservedRunningTime="2025-10-18 09:45:14.006469489 +0000 UTC m=+1.213325155"
	Oct 18 09:45:14 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:14.022200    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-942905" podStartSLOduration=1.020247355 podStartE2EDuration="1.020247355s" podCreationTimestamp="2025-10-18 09:45:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:14.019270011 +0000 UTC m=+1.226125677" watchObservedRunningTime="2025-10-18 09:45:14.020247355 +0000 UTC m=+1.227103021"
	Oct 18 09:45:17 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:17.635124    1295 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 18 09:45:17 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:17.635869    1295 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823344    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/16ec7433-66c9-48fb-bd90-244a1b7986d7-lib-modules\") pod \"kube-proxy-x9fjs\" (UID: \"16ec7433-66c9-48fb-bd90-244a1b7986d7\") " pod="kube-system/kube-proxy-x9fjs"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823457    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqwx4\" (UniqueName: \"kubernetes.io/projected/009f3589-2a75-43d6-8bf7-d80c5147bc32-kube-api-access-hqwx4\") pod \"kindnet-xtmcm\" (UID: \"009f3589-2a75-43d6-8bf7-d80c5147bc32\") " pod="kube-system/kindnet-xtmcm"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823519    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/16ec7433-66c9-48fb-bd90-244a1b7986d7-kube-proxy\") pod \"kube-proxy-x9fjs\" (UID: \"16ec7433-66c9-48fb-bd90-244a1b7986d7\") " pod="kube-system/kube-proxy-x9fjs"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823540    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/16ec7433-66c9-48fb-bd90-244a1b7986d7-xtables-lock\") pod \"kube-proxy-x9fjs\" (UID: \"16ec7433-66c9-48fb-bd90-244a1b7986d7\") " pod="kube-system/kube-proxy-x9fjs"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823580    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/009f3589-2a75-43d6-8bf7-d80c5147bc32-cni-cfg\") pod \"kindnet-xtmcm\" (UID: \"009f3589-2a75-43d6-8bf7-d80c5147bc32\") " pod="kube-system/kindnet-xtmcm"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823656    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/009f3589-2a75-43d6-8bf7-d80c5147bc32-lib-modules\") pod \"kindnet-xtmcm\" (UID: \"009f3589-2a75-43d6-8bf7-d80c5147bc32\") " pod="kube-system/kindnet-xtmcm"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823699    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5gp6\" (UniqueName: \"kubernetes.io/projected/16ec7433-66c9-48fb-bd90-244a1b7986d7-kube-api-access-k5gp6\") pod \"kube-proxy-x9fjs\" (UID: \"16ec7433-66c9-48fb-bd90-244a1b7986d7\") " pod="kube-system/kube-proxy-x9fjs"
	Oct 18 09:45:18 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:18.823736    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/009f3589-2a75-43d6-8bf7-d80c5147bc32-xtables-lock\") pod \"kindnet-xtmcm\" (UID: \"009f3589-2a75-43d6-8bf7-d80c5147bc32\") " pod="kube-system/kindnet-xtmcm"
	Oct 18 09:45:19 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:19.950970    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x9fjs" podStartSLOduration=1.950948671 podStartE2EDuration="1.950948671s" podCreationTimestamp="2025-10-18 09:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:19.950867388 +0000 UTC m=+7.157723059" watchObservedRunningTime="2025-10-18 09:45:19.950948671 +0000 UTC m=+7.157804336"
	Oct 18 09:45:22 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:22.763484    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-xtmcm" podStartSLOduration=4.763463038 podStartE2EDuration="4.763463038s" podCreationTimestamp="2025-10-18 09:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:19.991483284 +0000 UTC m=+7.198338951" watchObservedRunningTime="2025-10-18 09:45:22.763463038 +0000 UTC m=+9.970318703"
	Oct 18 09:45:29 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:29.804919    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 18 09:45:29 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:29.908087    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1cba89a-b3da-49cd-9f36-7fcbad7a969d-config-volume\") pod \"coredns-66bc5c9577-g6bf9\" (UID: \"e1cba89a-b3da-49cd-9f36-7fcbad7a969d\") " pod="kube-system/coredns-66bc5c9577-g6bf9"
	Oct 18 09:45:29 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:29.908137    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grsz9\" (UniqueName: \"kubernetes.io/projected/2ede4817-c456-41e7-a9f5-4495deed70de-kube-api-access-grsz9\") pod \"storage-provisioner\" (UID: \"2ede4817-c456-41e7-a9f5-4495deed70de\") " pod="kube-system/storage-provisioner"
	Oct 18 09:45:29 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:29.908174    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7476\" (UniqueName: \"kubernetes.io/projected/e1cba89a-b3da-49cd-9f36-7fcbad7a969d-kube-api-access-q7476\") pod \"coredns-66bc5c9577-g6bf9\" (UID: \"e1cba89a-b3da-49cd-9f36-7fcbad7a969d\") " pod="kube-system/coredns-66bc5c9577-g6bf9"
	Oct 18 09:45:29 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:29.908196    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2ede4817-c456-41e7-a9f5-4495deed70de-tmp\") pod \"storage-provisioner\" (UID: \"2ede4817-c456-41e7-a9f5-4495deed70de\") " pod="kube-system/storage-provisioner"
	Oct 18 09:45:30 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:30.979740    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.979717749 podStartE2EDuration="11.979717749s" podCreationTimestamp="2025-10-18 09:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:30.979593427 +0000 UTC m=+18.186449093" watchObservedRunningTime="2025-10-18 09:45:30.979717749 +0000 UTC m=+18.186573417"
	Oct 18 09:45:30 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:30.980045    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-g6bf9" podStartSLOduration=12.980017759999999 podStartE2EDuration="12.98001776s" podCreationTimestamp="2025-10-18 09:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-18 09:45:30.971487128 +0000 UTC m=+18.178342796" watchObservedRunningTime="2025-10-18 09:45:30.98001776 +0000 UTC m=+18.186873429"
	Oct 18 09:45:32 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:32.923922    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfw8\" (UniqueName: \"kubernetes.io/projected/3d931b08-4593-4046-8efd-e406a9611796-kube-api-access-ptfw8\") pod \"busybox\" (UID: \"3d931b08-4593-4046-8efd-e406a9611796\") " pod="default/busybox"
	Oct 18 09:45:35 default-k8s-diff-port-942905 kubelet[1295]: I1018 09:45:35.986281    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.951240326 podStartE2EDuration="3.986255652s" podCreationTimestamp="2025-10-18 09:45:32 +0000 UTC" firstStartedPulling="2025-10-18 09:45:33.157397357 +0000 UTC m=+20.364253002" lastFinishedPulling="2025-10-18 09:45:35.192412667 +0000 UTC m=+22.399268328" observedRunningTime="2025-10-18 09:45:35.986072155 +0000 UTC m=+23.192927822" watchObservedRunningTime="2025-10-18 09:45:35.986255652 +0000 UTC m=+23.193111318"
	Oct 18 09:45:42 default-k8s-diff-port-942905 kubelet[1295]: E1018 09:45:42.926367    1295 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41124->127.0.0.1:36133: write tcp 127.0.0.1:41124->127.0.0.1:36133: write: broken pipe
	
	
	==> storage-provisioner [095d635a19cec021163ffeb1797f68d3f5685e809350a8c7f2920944fe0a4b14] <==
	I1018 09:45:30.185278       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:45:30.193386       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:45:30.193422       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:45:30.195581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:30.201365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:45:30.201485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:45:30.201655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_2b3908d8-eb08-4053-bb3e-9eebe896a522!
	I1018 09:45:30.201647       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc0e8d2d-9133-4c3a-bcf4-257c6fc89570", APIVersion:"v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-942905_2b3908d8-eb08-4053-bb3e-9eebe896a522 became leader
	W1018 09:45:30.206022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:30.209407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:45:30.302398       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_2b3908d8-eb08-4053-bb3e-9eebe896a522!
	W1018 09:45:32.212297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:32.216020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:34.219708       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:34.224896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:36.228605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:36.234208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:38.237707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:38.241462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:40.245430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:40.249861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:42.252786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:42.258375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:44.261881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:45:44.267525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
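
Note on the storage-provisioner warnings in the log above: the provisioner's leader election still reads and writes v1 Endpoints objects, and every such call now triggers the client-side "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warning. These are deprecation notices, not errors. For reference, a minimal client-go sketch of the replacement lookup the warning points at (the kubeconfig path and the kube-system namespace are illustrative assumptions, not taken from this report):

// endpointslices.go: a minimal client-go sketch (assumed, not part of
// minikube or the storage provisioner) that lists EndpointSlices, the
// discovery.k8s.io/v1 replacement the deprecation warning recommends.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); an assumption here.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List the slices in kube-system instead of reading v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(
		context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}
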
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.75s)
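
A reading note on the control-plane logs above: the long runs of etcd "rejected connection on client endpoint ... EOF" warnings and the apiserver's "use of closed network connection" error are both consistent with raw TCP health probes that open a connection and close it without completing a handshake. A minimal Go sketch of such a probe (127.0.0.1:2379 is etcd's conventional client port, assumed here for illustration):

// probe.go: a minimal sketch (assumed, not from the test harness) of a
// TCP liveness probe that produces etcd's "rejected connection ... EOF"
// warnings: it dials the client port and closes the connection without
// sending any TLS or client data.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", time.Second)
	if err != nil {
		fmt.Println("client port not reachable:", err)
		return
	}
	// Closing immediately makes the server read EOF before a handshake,
	// which etcd logs as "rejected connection on client endpoint".
	conn.Close()
	fmt.Println("client port accepts TCP connections")
}

If that reading is right, those lines are benign startup noise rather than the cause of the recorded failure.
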

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-708733 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-708733 --alsologtostderr -v=1: exit status 80 (1.983782092s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-708733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:45:47.658877  396863 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:45:47.659148  396863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:47.659161  396863 out.go:374] Setting ErrFile to fd 2...
	I1018 09:45:47.659167  396863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:47.659372  396863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:45:47.659676  396863 out.go:368] Setting JSON to false
	I1018 09:45:47.659733  396863 mustload.go:65] Loading cluster: newest-cni-708733
	I1018 09:45:47.660212  396863 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:47.660806  396863 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:47.680225  396863 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:47.680578  396863 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:47.749021  396863 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:83 OomKillDisable:false NGoroutines:94 SystemTime:2025-10-18 09:45:47.733275085 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:47.749927  396863 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-708733 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:45:47.752737  396863 out.go:179] * Pausing node newest-cni-708733 ... 
	I1018 09:45:47.754225  396863 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:47.754468  396863 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:47.754503  396863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:47.774877  396863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:47.874943  396863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:47.888631  396863 pause.go:52] kubelet running: true
	I1018 09:45:47.888706  396863 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:45:48.036454  396863 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:45:48.036554  396863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:45:48.102273  396863 cri.go:89] found id: "ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68"
	I1018 09:45:48.102297  396863 cri.go:89] found id: "204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20"
	I1018 09:45:48.102303  396863 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:48.102308  396863 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:48.102313  396863 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:48.102320  396863 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:48.102324  396863 cri.go:89] found id: ""
	I1018 09:45:48.102385  396863 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:45:48.114727  396863 retry.go:31] will retry after 337.511317ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:48.453074  396863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:48.469790  396863 pause.go:52] kubelet running: false
	I1018 09:45:48.469925  396863 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:45:48.633896  396863 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:45:48.633991  396863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:45:48.737735  396863 cri.go:89] found id: "ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68"
	I1018 09:45:48.737767  396863 cri.go:89] found id: "204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20"
	I1018 09:45:48.737773  396863 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:48.737778  396863 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:48.737782  396863 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:48.737788  396863 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:48.737922  396863 cri.go:89] found id: ""
	I1018 09:45:48.737977  396863 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:45:48.754080  396863 retry.go:31] will retry after 543.527734ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:48Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:49.297866  396863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:49.314265  396863 pause.go:52] kubelet running: false
	I1018 09:45:49.314324  396863 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:45:49.483753  396863 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:45:49.483870  396863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:45:49.565922  396863 cri.go:89] found id: "ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68"
	I1018 09:45:49.565949  396863 cri.go:89] found id: "204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20"
	I1018 09:45:49.565954  396863 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:49.565960  396863 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:49.565964  396863 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:49.565969  396863 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:49.565973  396863 cri.go:89] found id: ""
	I1018 09:45:49.566035  396863 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:45:49.582316  396863 out.go:203] 
	W1018 09:45:49.583496  396863 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:45:49.583518  396863 out.go:285] * 
	W1018 09:45:49.588950  396863 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:45:49.590268  396863 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-708733 --alsologtostderr -v=1 failed: exit status 80
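Root-cause note (an interpretation of the trace above, not something the test asserts): the pause helper disables the kubelet, lists kube-system/kubernetes-dashboard/istio-operator containers through crictl (six IDs are found, so CRI-O itself is responsive), then shells out to `sudo runc list -f json`, which fails because runc's default state directory `/run/runc` does not exist. `/run` is a tmpfs in the kic node container (see `HostConfig.Tmpfs` in the docker inspect output below), so that directory is gone after the restart and only reappears if runc itself launches a container; if this CRI-O build drives a different OCI runtime whose state lives elsewhere (crun keeps its state under /run/crun, for example, which is an assumption here, not something the log proves), `/run/runc` is never recreated. A minimal sketch for reproducing the check by hand, assuming the profile name newest-cni-708733 from this run:

	# Does runc's default state directory exist inside the node container?
	docker exec newest-cni-708733 ls -ld /run/runc

	# The exact command minikube's pause helper runs over SSH.
	docker exec newest-cni-708733 sudo runc list -f json

	# CRI-O tracks the same containers through the CRI socket, so this
	# succeeds even while /run/runc is absent.
	docker exec newest-cni-708733 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system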
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-708733
helpers_test.go:243: (dbg) docker inspect newest-cni-708733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	        "Created": "2025-10-18T09:44:58.376755553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 392060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:45:35.839912456Z",
	            "FinishedAt": "2025-10-18T09:45:35.036264253Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hostname",
	        "HostsPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hosts",
	        "LogPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475-json.log",
	        "Name": "/newest-cni-708733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-708733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-708733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	                "LowerDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-708733",
	                "Source": "/var/lib/docker/volumes/newest-cni-708733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-708733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-708733",
	                "name.minikube.sigs.k8s.io": "newest-cni-708733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92b19aa89f60f57ca70370f1e3221723209f4ebdf217098a82c0c9b5059ae9b7",
	            "SandboxKey": "/var/run/docker/netns/92b19aa89f60",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-708733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:bc:0b:1b:4b:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1aaffc18dfa2904bed47c15aa8ec5d5036ec16333dc17a28b2beac767bfe6ebf",
	                    "EndpointID": "5a5be5fb230dbc668317223daaa59feae514027529df9380695c66e8c2376ff5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-708733",
	                        "589c5abc3dda"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
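Note the `HostConfig.Tmpfs` entry in the inspect output above: `/run` and `/tmp` are tmpfs mounts, so any runtime state kept under `/run` is discarded whenever the container restarts, which is consistent with the `open /run/runc: no such file or directory` failure. A quick way to pull just that field, assuming the same profile name:

	docker inspect -f '{{json .HostConfig.Tmpfs}}' newest-cni-708733
	# expected: {"/run":"","/tmp":""}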
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733: exit status 2 (363.61945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-708733 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-708733 logs -n 25: (1.467142679s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p newest-cni-708733 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-942905 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ image   │ newest-cni-708733 image list --format=json                                                                                                                                                                                                    │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ pause   │ -p newest-cni-708733 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
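The Audit table above ends with the failing `pause -p newest-cni-708733` invocation, recognizable by its empty END TIME cell (the earlier `pause` rows for old-k8s-version-619885 and no-preload-589869 failed the same way). The table is rendered from minikube's audit log, which can also be inspected offline; a sketch assuming the default MINIKUBE_HOME layout, using plain grep since the JSON schema of audit.json is not shown in this report:

	# Count recorded pause invocations in the local audit log.
	grep -c '"pause"' "$HOME/.minikube/logs/audit.json"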
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:45:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:45:35.612113  391835 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:45:35.612372  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612384  391835 out.go:374] Setting ErrFile to fd 2...
	I1018 09:45:35.612390  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612627  391835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:45:35.613204  391835 out.go:368] Setting JSON to false
	I1018 09:45:35.614405  391835 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5280,"bootTime":1760775456,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:45:35.614495  391835 start.go:141] virtualization: kvm guest
	I1018 09:45:35.616488  391835 out.go:179] * [newest-cni-708733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:45:35.617763  391835 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:45:35.617765  391835 notify.go:220] Checking for updates...
	I1018 09:45:35.619047  391835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:45:35.620517  391835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:35.621508  391835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:45:35.622619  391835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:45:35.623653  391835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:45:35.625265  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:35.625773  391835 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:45:35.648625  391835 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:45:35.648730  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.707534  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.696960967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.707701  391835 docker.go:318] overlay module found
	I1018 09:45:35.710095  391835 out.go:179] * Using the docker driver based on existing profile
	I1018 09:45:35.711170  391835 start.go:305] selected driver: docker
	I1018 09:45:35.711185  391835 start.go:925] validating driver "docker" against &{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.711263  391835 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:45:35.711899  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.766563  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.756934982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.766911  391835 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:35.766941  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:35.767009  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:35.767062  391835 start.go:349] cluster config:
	{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.768913  391835 out.go:179] * Starting "newest-cni-708733" primary control-plane node in "newest-cni-708733" cluster
	I1018 09:45:35.770258  391835 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:45:35.771551  391835 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:45:35.772648  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:35.772696  391835 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:45:35.772707  391835 cache.go:58] Caching tarball of preloaded images
	I1018 09:45:35.772786  391835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:45:35.772907  391835 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:45:35.772988  391835 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:45:35.773146  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:35.793193  391835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:45:35.793211  391835 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:45:35.793226  391835 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:45:35.793247  391835 start.go:360] acquireMachinesLock for newest-cni-708733: {Name:mkb1aaee475623ac79c9cbc5f8d5e2dda34020d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:45:35.793300  391835 start.go:364] duration metric: took 36.906µs to acquireMachinesLock for "newest-cni-708733"
	I1018 09:45:35.793316  391835 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:45:35.793321  391835 fix.go:54] fixHost starting: 
	I1018 09:45:35.793514  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:35.810764  391835 fix.go:112] recreateIfNeeded on newest-cni-708733: state=Stopped err=<nil>
	W1018 09:45:35.810808  391835 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:45:32.487875  391061 out.go:252] * Restarting existing docker container for "embed-certs-055175" ...
	I1018 09:45:32.487930  391061 cli_runner.go:164] Run: docker start embed-certs-055175
	I1018 09:45:32.746738  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:32.766565  391061 kic.go:430] container "embed-certs-055175" state is running.
	I1018 09:45:32.767066  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:32.787489  391061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/config.json ...
	I1018 09:45:32.787761  391061 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:32.787860  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:32.807525  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:32.807763  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:32.807779  391061 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:32.808459  391061 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36084->127.0.0.1:33217: read: connection reset by peer
	I1018 09:45:35.951449  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:35.951481  391061 ubuntu.go:182] provisioning hostname "embed-certs-055175"
	I1018 09:45:35.951567  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:35.970253  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:35.970525  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:35.970577  391061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-055175 && echo "embed-certs-055175" | sudo tee /etc/hostname
	I1018 09:45:36.120062  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:36.120141  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.139369  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.139660  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.139685  391061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-055175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-055175/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-055175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:36.279283  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:36.279331  391061 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:36.279360  391061 ubuntu.go:190] setting up certificates
	I1018 09:45:36.279373  391061 provision.go:84] configureAuth start
	I1018 09:45:36.279436  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:36.301592  391061 provision.go:143] copyHostCerts
	I1018 09:45:36.301663  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:36.301685  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:36.301767  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:36.301935  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:36.301952  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:36.301999  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:36.302090  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:36.302102  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:36.302140  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:36.302218  391061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-055175 san=[127.0.0.1 192.168.76.2 embed-certs-055175 localhost minikube]
	I1018 09:45:36.521938  391061 provision.go:177] copyRemoteCerts
	I1018 09:45:36.522007  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:36.522049  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.539806  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:36.638382  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:36.656542  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:45:36.674914  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:36.692375  391061 provision.go:87] duration metric: took 412.989421ms to configureAuth
	I1018 09:45:36.692399  391061 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:36.692583  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:36.692696  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.711813  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.712122  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.712145  391061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:36.996777  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:36.996808  391061 machine.go:96] duration metric: took 4.209028137s to provisionDockerMachine
	I1018 09:45:36.996838  391061 start.go:293] postStartSetup for "embed-certs-055175" (driver="docker")
	I1018 09:45:36.996853  391061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:36.996924  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:36.996992  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.015643  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.112419  391061 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:37.115866  391061 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:37.115892  391061 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:37.115901  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:37.115940  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:37.116006  391061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:37.116105  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:37.123537  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:37.140936  391061 start.go:296] duration metric: took 144.080164ms for postStartSetup
	I1018 09:45:37.141011  391061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:37.141113  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.158840  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.254266  391061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:37.258887  391061 fix.go:56] duration metric: took 4.791318273s for fixHost
	I1018 09:45:37.258913  391061 start.go:83] releasing machines lock for "embed-certs-055175", held for 4.791367111s
	I1018 09:45:37.258983  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:37.276795  391061 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:37.276844  391061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:37.276893  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.276895  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.295580  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.295867  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.442421  391061 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:37.449145  391061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:37.485446  391061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:37.490286  391061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:37.490344  391061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:37.498440  391061 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:45:37.498462  391061 start.go:495] detecting cgroup driver to use...
	I1018 09:45:37.498498  391061 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:37.498541  391061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:37.512575  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:37.524383  391061 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:37.524431  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:37.538338  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:37.550505  391061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:37.630207  391061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:37.707104  391061 docker.go:234] disabling docker service ...
	I1018 09:45:37.707165  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:37.721802  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:37.734681  391061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:37.810403  391061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:37.892105  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:37.904421  391061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:37.918908  391061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:37.919002  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.927975  391061 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:37.928025  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.937739  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.946621  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.955765  391061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:37.963854  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.972623  391061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.981215  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.990025  391061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:37.997012  391061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:38.004111  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.083139  391061 ssh_runner.go:195] Run: sudo systemctl restart crio
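Assuming the sed edits above applied cleanly to the stock kicbase drop-in, /etc/crio/crio.conf.d/02-crio.conf should now contain roughly the following (reconstructed from the commands in this log, not captured from the node; the section headers are assumptions about the image's default layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]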
	I1018 09:45:38.194280  391061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:38.194350  391061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:38.198391  391061 start.go:563] Will wait 60s for crictl version
	I1018 09:45:38.198444  391061 ssh_runner.go:195] Run: which crictl
	I1018 09:45:38.202260  391061 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:38.226451  391061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:38.226528  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.255560  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.285154  391061 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:36.049688  353123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.056359588s)
	W1018 09:45:36.049730  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1018 09:45:36.049740  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:36.049755  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:36.082656  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:36.082690  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:36.185625  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:36.185657  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:36.223015  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:36.223045  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:36.257875  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:36.257910  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:36.320259  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:36.320290  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:38.286182  391061 cli_runner.go:164] Run: docker network inspect embed-certs-055175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:38.303601  391061 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:38.307969  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
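Note the pattern used above to update /etc/hosts: the old mapping is filtered out with grep -v, the new line is appended, and the result is copied back with cp rather than renamed into place. Inside a Docker container /etc/hosts is a bind mount, so replacing the file's inode via mv would fail; cp rewrites the contents in place and leaves the mount intact. A minimal standalone sketch of the same idiom, with a hypothetical host alias:

	{ grep -v $'\tmy.alias$' /etc/hosts; printf '192.168.76.1\tmy.alias\n'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp keeps the inode, so the bind mount stays valid
	rm -f /tmp/h.$$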
	I1018 09:45:38.318414  391061 kubeadm.go:883] updating cluster {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:38.318562  391061 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:38.318621  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.351678  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.351700  391061 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:38.351743  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.376983  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.377006  391061 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:38.377014  391061 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:38.377106  391061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-055175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
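The empty ExecStart= followed by the full ExecStart=... in the drop-in above is the standard systemd idiom: an empty assignment clears the command inherited from the base kubelet.service so the drop-in can substitute its own. Once the file is installed under /etc/systemd/system/kubelet.service.d/ (see the scp step below), the merged unit can be inspected with:

	systemctl cat kubelet   # prints the base unit plus all drop-ins in merge order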
	I1018 09:45:38.377172  391061 ssh_runner.go:195] Run: crio config
	I1018 09:45:38.422001  391061 cni.go:84] Creating CNI manager for ""
	I1018 09:45:38.422023  391061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:38.422042  391061 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:45:38.422063  391061 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-055175 NodeName:embed-certs-055175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:38.422186  391061 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-055175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:38.422240  391061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:38.430216  391061 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:38.430276  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:38.438081  391061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:38.450317  391061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:38.462520  391061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
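The kubeadm.yaml.new just transferred (rendered from the config dumped above) can be sanity-checked on the node before kubeadm consumes it; a minimal sketch, assuming the bundled kubeadm binary sits alongside the kubelet/kubectl binaries found earlier:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new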
	I1018 09:45:38.474657  391061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:38.478282  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:38.488221  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.566896  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:38.591111  391061 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175 for IP: 192.168.76.2
	I1018 09:45:38.591138  391061 certs.go:195] generating shared ca certs ...
	I1018 09:45:38.591161  391061 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:38.591310  391061 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:38.591384  391061 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:38.591402  391061 certs.go:257] generating profile certs ...
	I1018 09:45:38.591504  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/client.key
	I1018 09:45:38.591598  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key.d17ebb9e
	I1018 09:45:38.591678  391061 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key
	I1018 09:45:38.591811  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:38.591882  391061 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:38.591896  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:38.591930  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:38.591966  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:38.591999  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:38.592055  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:38.592628  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:38.611514  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:38.630402  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:38.649635  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:38.673181  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:45:38.692242  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:45:38.709954  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:38.728001  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:38.745902  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:38.763592  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:38.781470  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:38.799868  391061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:38.812542  391061 ssh_runner.go:195] Run: openssl version
	I1018 09:45:38.818721  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:38.827249  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831071  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831126  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.867725  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:38.876160  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:38.884525  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888219  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888264  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.922467  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:38.930945  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:38.939990  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943700  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943757  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.978998  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:38.987211  391061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:38.991075  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:39.025412  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:39.059499  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:39.101020  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:39.146140  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:39.199431  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
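Each openssl probe above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, evidently so the restart path can decide whether any control-plane certs need regenerating before the apiserver comes back. The same check in isolation, against a hypothetical cert path:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "cert good for at least another day"
	else
	  echo "cert expires within 24h; regenerate before restart" >&2
	fi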
	I1018 09:45:39.253543  391061 kubeadm.go:400] StartCluster: {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:39.253654  391061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:39.253726  391061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:39.287480  391061 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:45:39.287507  391061 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:45:39.287514  391061 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:45:39.287518  391061 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:45:39.287523  391061 cri.go:89] found id: ""
	I1018 09:45:39.287581  391061 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:39.301644  391061 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:39.301714  391061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:39.310767  391061 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:39.310787  391061 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:39.310879  391061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:39.319011  391061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:39.319811  391061 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-055175" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.320288  391061 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-055175" cluster setting kubeconfig missing "embed-certs-055175" context setting]
	I1018 09:45:39.321074  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.322981  391061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:39.330833  391061 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:45:39.330865  391061 kubeadm.go:601] duration metric: took 20.071828ms to restartPrimaryControlPlane
	I1018 09:45:39.330874  391061 kubeadm.go:402] duration metric: took 77.343946ms to StartCluster
	I1018 09:45:39.330893  391061 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.330969  391061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.332950  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.333199  391061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:39.333382  391061 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:39.333486  391061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-055175"
	I1018 09:45:39.333505  391061 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-055175"
	W1018 09:45:39.333518  391061 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:39.333527  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:39.333540  391061 addons.go:69] Setting dashboard=true in profile "embed-certs-055175"
	I1018 09:45:39.333583  391061 addons.go:238] Setting addon dashboard=true in "embed-certs-055175"
	W1018 09:45:39.333594  391061 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:39.333598  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333601  391061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-055175"
	I1018 09:45:39.333631  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333630  391061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-055175"
	I1018 09:45:39.334122  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334143  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334172  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.335198  391061 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:39.336588  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:39.364419  391061 addons.go:238] Setting addon default-storageclass=true in "embed-certs-055175"
	W1018 09:45:39.364441  391061 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:39.364467  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.364941  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.365279  391061 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:39.365348  391061 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:39.366461  391061 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.366483  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:39.366536  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.369244  391061 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:35.812543  391835 out.go:252] * Restarting existing docker container for "newest-cni-708733" ...
	I1018 09:45:35.812620  391835 cli_runner.go:164] Run: docker start newest-cni-708733
	I1018 09:45:36.066412  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:36.087638  391835 kic.go:430] container "newest-cni-708733" state is running.
	I1018 09:45:36.088075  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:36.108867  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:36.109119  391835 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:36.109186  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:36.129372  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.129746  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:36.129764  391835 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:36.130410  391835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56176->127.0.0.1:33223: read: connection reset by peer
	I1018 09:45:39.281604  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.281635  391835 ubuntu.go:182] provisioning hostname "newest-cni-708733"
	I1018 09:45:39.281704  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.304537  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.304897  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.304921  391835 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708733 && echo "newest-cni-708733" | sudo tee /etc/hostname
	I1018 09:45:39.471607  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.471684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.493328  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.493535  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.493548  391835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:39.648618  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:39.648648  391835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:39.648670  391835 ubuntu.go:190] setting up certificates
	I1018 09:45:39.648683  391835 provision.go:84] configureAuth start
	I1018 09:45:39.648740  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:39.671912  391835 provision.go:143] copyHostCerts
	I1018 09:45:39.671977  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:39.672067  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:39.672162  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:39.672259  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:39.672269  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:39.672296  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:39.672348  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:39.672358  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:39.672380  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:39.672424  391835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708733 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733]
	I1018 09:45:39.936585  391835 provision.go:177] copyRemoteCerts
	I1018 09:45:39.936652  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:39.936752  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.959548  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.065365  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:40.086638  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:40.108607  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:40.132253  391835 provision.go:87] duration metric: took 483.553625ms to configureAuth
	I1018 09:45:40.132292  391835 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:40.132527  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:40.132665  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.153078  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:40.153352  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:40.153370  391835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:40.448100  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:40.448132  391835 machine.go:96] duration metric: took 4.33899731s to provisionDockerMachine
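The CRIO_MINIKUBE_OPTIONS value written a few lines up only takes effect because the kicbase crio.service sources /etc/sysconfig/crio.minikube as an environment file; that is an assumption about the image's unit layout rather than something shown in this log. After the restart, the flag can be confirmed on the running daemon:

	ps -o args= -C crio   # should include --insecure-registry 10.96.0.0/12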
	I1018 09:45:40.448147  391835 start.go:293] postStartSetup for "newest-cni-708733" (driver="docker")
	I1018 09:45:40.448162  391835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:40.448233  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:40.448284  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.474620  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.577567  391835 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:40.582063  391835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:40.582097  391835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:40.582110  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:40.582160  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:40.582267  391835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:40.582402  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:40.591516  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:39.370168  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:45:39.370188  391061 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:45:39.370247  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.400814  391061 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.400915  391061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:39.400996  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.405011  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.407383  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.425670  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.505286  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:39.520778  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.523155  391061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:39.523608  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:45:39.523631  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:45:39.538779  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.539364  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:45:39.539438  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:45:39.560150  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:45:39.560179  391061 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:45:39.581867  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:45:39.581933  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:45:39.596852  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:45:39.596884  391061 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:45:39.612014  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:45:39.612039  391061 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:45:39.626575  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:45:39.626600  391061 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:45:39.639500  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:45:39.639525  391061 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:45:39.654074  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:39.654098  391061 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:45:39.670286  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:40.899341  391061 node_ready.go:49] node "embed-certs-055175" is "Ready"
	I1018 09:45:40.899374  391061 node_ready.go:38] duration metric: took 1.376176965s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:40.899390  391061 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:40.899443  391061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:41.576093  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055278034s)
	I1018 09:45:41.576162  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.037341616s)
	I1018 09:45:41.576238  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.905916583s)
	I1018 09:45:41.576274  391061 api_server.go:72] duration metric: took 2.24304532s to wait for apiserver process to appear ...
	I1018 09:45:41.576289  391061 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:41.576309  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:41.578020  391061 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-055175 addons enable metrics-server
	
	I1018 09:45:41.582881  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:41.582904  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
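The two [-] poststarthook entries above (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are the usual transient failures immediately after an apiserver restart and normally flip to ok within a few seconds, which is why the health loop simply keeps polling. A rough way to watch the same endpoint by hand, assuming the default system:public-info-viewer binding still permits anonymous /healthz and that skipping TLS verification (-k) is acceptable in a throwaway test cluster:

	until curl -ksf https://192.168.76.2:8443/healthz >/dev/null; do sleep 1; done
	curl -ks "https://192.168.76.2:8443/healthz?verbose"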
	I1018 09:45:41.589049  391061 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:45:40.621550  391835 start.go:296] duration metric: took 173.38515ms for postStartSetup
	I1018 09:45:40.621639  391835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:40.621684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.643288  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.745039  391835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:40.750133  391835 fix.go:56] duration metric: took 4.956803913s for fixHost
	I1018 09:45:40.750167  391835 start.go:83] releasing machines lock for "newest-cni-708733", held for 4.95685606s
	I1018 09:45:40.750236  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:40.781167  391835 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:40.781292  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.781186  391835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:40.781618  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.812063  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.813770  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.940361  391835 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:41.006764  391835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:41.061782  391835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:41.068085  391835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:41.068161  391835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:41.078354  391835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:45:41.078379  391835 start.go:495] detecting cgroup driver to use...
	I1018 09:45:41.078424  391835 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:41.078467  391835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:41.098853  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:41.116027  391835 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:41.116089  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:41.133582  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:41.150108  391835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:41.258784  391835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:41.365493  391835 docker.go:234] disabling docker service ...
	I1018 09:45:41.365568  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:41.389182  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:41.405299  391835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:41.512499  391835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:41.597024  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:41.609959  391835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:41.624662  391835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:41.624735  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.634047  391835 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:41.634099  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.643165  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.652394  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.663256  391835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:41.672317  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.684071  391835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.694058  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
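Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf containing lines like the following (illustrative excerpt derived from the commands; surrounding keys omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]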
	I1018 09:45:41.705032  391835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:41.715244  391835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:41.725978  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:41.812310  391835 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:41.928316  391835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:41.928398  391835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:41.933300  391835 start.go:563] Will wait 60s for crictl version
	I1018 09:45:41.933375  391835 ssh_runner.go:195] Run: which crictl
	I1018 09:45:41.937695  391835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:41.968232  391835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
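The two "Will wait 60s" steps above are simple poll-with-deadline loops: stat the socket (or run crictl) until it succeeds or the deadline passes. A minimal sketch of the socket wait (illustrative only, not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a path until it exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket file is present
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }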
	I1018 09:45:41.968322  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.008722  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.051058  391835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:42.052454  391835 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:42.076948  391835 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:42.082993  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
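The /etc/hosts update above is a filter-and-append: drop any stale host.minikube.internal mapping, re-add the current one, and copy the result into place. An equivalent sketch in Go (illustrative; the shell version in the log does the same via grep -v, echo, and sudo cp):

    package main

    import (
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop any existing mapping for host.minikube.internal.
            if !strings.HasSuffix(line, "\thost.minikube.internal") {
                kept = append(kept, line)
            }
        }
        // Entry value taken from the log line above.
        kept = append(kept, "192.168.103.1\thost.minikube.internal")
        // Writing /etc/hosts requires root.
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
    }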
	I1018 09:45:42.098027  391835 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:45:41.590293  391061 addons.go:514] duration metric: took 2.256916495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:45:42.076937  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:42.081653  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:42.081688  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	(healthz detail identical to the block above)
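These 500s are expected while the apiserver's post-start hooks (here rbac/bootstrap-roles) finish; the caller simply polls /healthz until it returns 200. A minimal sketch of such a poll (not minikube's code; TLS verification is skipped because the host does not trust the cluster CA in this sketch):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.76.2:8443/healthz")
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                if code == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                // 403/500 are transient while post-start hooks complete; keep polling.
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }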
	I1018 09:45:42.099318  391835 kubeadm.go:883] updating cluster {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:42.099457  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:42.099596  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.132475  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.132500  391835 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:42.132566  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.158774  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.158804  391835 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:42.158815  391835 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:42.158983  391835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-708733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:42.159100  391835 ssh_runner.go:195] Run: crio config
	I1018 09:45:42.208450  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:42.208480  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:42.208500  391835 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:45:42.208539  391835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-708733 NodeName:newest-cni-708733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:42.208747  391835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-708733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
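The generated file is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check such a stream before shipping it to the node, sketched with gopkg.in/yaml.v3 (an assumed dependency; this is not part of minikube's flow):

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
        if err != nil {
            panic(err)
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the multi-document stream
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%s/%s\n", doc["apiVersion"], doc["kind"])
        }
    }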
	I1018 09:45:42.208839  391835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:42.217704  391835 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:42.217771  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:42.225608  391835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:42.238980  391835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:42.255680  391835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:45:42.272042  391835 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:42.276501  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:42.289252  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:42.374516  391835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:42.395343  391835 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733 for IP: 192.168.103.2
	I1018 09:45:42.395365  391835 certs.go:195] generating shared ca certs ...
	I1018 09:45:42.395386  391835 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:42.395555  391835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:42.395633  391835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:42.395649  391835 certs.go:257] generating profile certs ...
	I1018 09:45:42.395732  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key
	I1018 09:45:42.395806  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd
	I1018 09:45:42.395874  391835 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key
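All of these certificates already exist, so regeneration is skipped. For reference, first-time creation of a CA like minikubeCA boils down to a self-signed x509 certificate; a compact sketch with crypto/x509 (illustrative only, with assumed key size and validity, not minikube's exact parameters):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0), // validity period is an assumption
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        // Self-signed: template and parent are the same certificate.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }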
	I1018 09:45:42.395977  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:42.396006  391835 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:42.396018  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:42.396049  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:42.396085  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:42.396116  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:42.396170  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:42.396756  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:42.417067  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:42.439230  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:42.459862  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:42.484661  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:45:42.505965  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:42.524153  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:42.542892  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:42.561246  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:42.579007  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:42.601111  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:42.619543  391835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:42.632771  391835 ssh_runner.go:195] Run: openssl version
	I1018 09:45:42.639054  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:42.648098  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652060  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652121  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.689227  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:42.698817  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:42.709921  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715254  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715316  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.758602  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:42.767388  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:42.776532  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780462  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780530  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.817681  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:42.826307  391835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:42.830455  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:42.868283  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:42.914730  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:42.969311  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:43.013486  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:43.072727  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
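Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (non-zero exit if so). The same check expressed in Go with crypto/x509 (a sketch; the path is one of the certs tested above):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Equivalent of -checkend 86400: fail if NotAfter falls within 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate will expire within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate is valid for at least 24h")
    }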
	I1018 09:45:43.117083  391835 kubeadm.go:400] StartCluster: {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:43.117198  391835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:43.117268  391835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:43.149877  391835 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:43.149897  391835 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:43.149902  391835 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:43.149907  391835 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:43.149910  391835 cri.go:89] found id: ""
	I1018 09:45:43.149950  391835 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:43.164027  391835 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:43Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:43.164105  391835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:43.173542  391835 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:43.173562  391835 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:43.173610  391835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:43.183087  391835 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:43.184252  391835 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-708733" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.185121  391835 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-708733" cluster setting kubeconfig missing "newest-cni-708733" context setting]
	I1018 09:45:43.186065  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.188016  391835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:43.197622  391835 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:45:43.197652  391835 kubeadm.go:601] duration metric: took 24.083385ms to restartPrimaryControlPlane
	I1018 09:45:43.197662  391835 kubeadm.go:402] duration metric: took 80.590487ms to StartCluster
	I1018 09:45:43.197680  391835 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.197747  391835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.200187  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.200440  391835 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:43.200573  391835 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:43.200694  391835 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-708733"
	I1018 09:45:43.200697  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:43.200716  391835 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-708733"
	W1018 09:45:43.200724  391835 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:43.200723  391835 addons.go:69] Setting dashboard=true in profile "newest-cni-708733"
	I1018 09:45:43.200740  391835 addons.go:69] Setting default-storageclass=true in profile "newest-cni-708733"
	I1018 09:45:43.200755  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.200765  391835 addons.go:238] Setting addon dashboard=true in "newest-cni-708733"
	I1018 09:45:43.200767  391835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-708733"
	W1018 09:45:43.200775  391835 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:43.200809  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.201120  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201273  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201290  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.203194  391835 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:43.205674  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:43.230206  391835 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:43.230277  391835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:43.231265  391835 addons.go:238] Setting addon default-storageclass=true in "newest-cni-708733"
	W1018 09:45:43.231300  391835 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:43.231412  391835 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:43.231426  391835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:43.231473  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.231666  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.232269  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.232392  391835 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:38.888310  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:40.473062  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:55036->192.168.85.2:8443: read: connection reset by peer
	I1018 09:45:40.473131  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:40.473212  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:40.506845  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:40.506916  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:40.506931  353123 cri.go:89] found id: ""
	I1018 09:45:40.506946  353123 logs.go:282] 2 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:40.507011  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.511163  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.515230  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:40.515304  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:40.546337  353123 cri.go:89] found id: ""
	I1018 09:45:40.546363  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.546373  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:40.546380  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:40.546439  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:40.576467  353123 cri.go:89] found id: ""
	I1018 09:45:40.576496  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.576507  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:40.576515  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:40.576575  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:40.618939  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:40.618964  353123 cri.go:89] found id: ""
	I1018 09:45:40.618974  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:40.619033  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.623516  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:40.623599  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:40.659535  353123 cri.go:89] found id: ""
	I1018 09:45:40.659564  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.659575  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:40.659606  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:40.659671  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:40.693235  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:40.693264  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:40.693269  353123 cri.go:89] found id: ""
	I1018 09:45:40.693279  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:40.693345  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.698191  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.702375  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:40.702453  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:40.740227  353123 cri.go:89] found id: ""
	I1018 09:45:40.740255  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.740266  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:40.740281  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:40.740346  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:40.778699  353123 cri.go:89] found id: ""
	I1018 09:45:40.778725  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.778736  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:40.778752  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:40.778767  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:40.832286  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:40.832323  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:40.985957  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:40.986003  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	W1018 09:45:41.025599  353123 logs.go:130] failed kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d": Process exited with status 1
	stdout:
	
	stderr:
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	 output: 
	** stderr ** 
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	
	** /stderr **
	I1018 09:45:41.025624  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:41.025640  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:41.093529  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:41.093584  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:41.122401  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:41.122440  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:41.207097  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:41.207126  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:41.207143  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:41.249695  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:41.249733  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:41.281023  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:41.281062  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:41.321273  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:41.321315  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:43.233701  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:45:43.233733  391835 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:45:43.233795  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.268387  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.269203  391835 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:43.269219  391835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:43.269275  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.274680  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.297168  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.370291  391835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:43.386972  391835 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:43.387031  391835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:43.392747  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:43.403892  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:45:43.403918  391835 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:45:43.408215  391835 api_server.go:72] duration metric: took 207.741406ms to wait for apiserver process to appear ...
	I1018 09:45:43.408237  391835 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:43.408255  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:43.422788  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:43.424491  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:45:43.424556  391835 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:45:43.453971  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:45:43.454064  391835 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:45:43.479605  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:45:43.479630  391835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:45:43.500907  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:45:43.500934  391835 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:45:43.518012  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:45:43.518080  391835 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:45:43.532061  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:45:43.532138  391835 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:45:43.547334  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:45:43.547409  391835 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:45:43.571918  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:43.571945  391835 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:45:43.596323  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:45.332506  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:45:45.332534  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:45:45.332550  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.345259  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:45:45.346369  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:45:45.408674  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.421427  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:45.421461  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	(healthz detail identical to the block above)
	I1018 09:45:45.908701  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.914931  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:45.914966  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:46.145474  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752692391s)
	I1018 09:45:46.145557  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.722738408s)
	I1018 09:45:46.145720  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.549352853s)
	I1018 09:45:46.148071  391835 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-708733 addons enable metrics-server
	
	I1018 09:45:46.158697  391835 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:45:46.160193  391835 addons.go:514] duration metric: took 2.959629061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:45:46.408901  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:46.414465  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:46.414508  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:46.908968  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:46.914131  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:45:46.915426  391835 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:46.915454  391835 api_server.go:131] duration metric: took 3.507210399s to wait for apiserver health ...
	I1018 09:45:46.915464  391835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:46.919169  391835 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:46.919209  391835 system_pods.go:61] "coredns-66bc5c9577-pcqqp" [56bb81cf-dbf6-45cd-8398-91762e3ce4a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:46.919223  391835 system_pods.go:61] "etcd-newest-cni-708733" [b25803cb-7959-4752-b0e3-7f80be73ac86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:46.919230  391835 system_pods.go:61] "kindnet-z7dcb" [77bfd17c-f58c-418b-8e31-c2893c4a3647] Running
	I1018 09:45:46.919236  391835 system_pods.go:61] "kube-apiserver-newest-cni-708733" [846be6bb-a108-477e-9128-e8d6d2e396bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:46.919244  391835 system_pods.go:61] "kube-controller-manager-newest-cni-708733" [82bcfbf8-19ab-4fd7-856f-f7eb0d2e887b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:46.919251  391835 system_pods.go:61] "kube-proxy-nq79m" [7618e803-4e75-4661-ab8d-99195c316305] Running
	I1018 09:45:46.919257  391835 system_pods.go:61] "kube-scheduler-newest-cni-708733" [5d3ff5b3-f4aa-4f9f-a1ce-6bc323fa29dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:46.919263  391835 system_pods.go:61] "storage-provisioner" [930742e4-08ac-435f-8ae3-a6bbf9a76bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:46.919269  391835 system_pods.go:74] duration metric: took 3.799893ms to wait for pod list to return data ...
	I1018 09:45:46.919279  391835 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:46.921533  391835 default_sa.go:45] found service account: "default"
	I1018 09:45:46.921552  391835 default_sa.go:55] duration metric: took 2.267911ms for default service account to be created ...
	I1018 09:45:46.921563  391835 kubeadm.go:586] duration metric: took 3.721097004s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:46.921598  391835 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:46.923792  391835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:46.923834  391835 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:46.923859  391835 node_conditions.go:105] duration metric: took 2.25193ms to run NodePressure ...
	I1018 09:45:46.923873  391835 start.go:241] waiting for startup goroutines ...
	I1018 09:45:46.923886  391835 start.go:246] waiting for cluster config update ...
	I1018 09:45:46.923900  391835 start.go:255] writing updated cluster config ...
	I1018 09:45:46.924119  391835 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:46.983400  391835 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:45:46.986152  391835 out.go:179] * Done! kubectl is now configured to use "newest-cni-708733" cluster and "default" namespace by default
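
The block above shows minikube's apiserver readiness loop: it polls https://192.168.103.2:8443/healthz and treats HTTP 500 responses (here failing on [-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes) as "not ready yet", until the endpoint returns 200 ("ok"). The following is a minimal Go sketch of that polling pattern, not minikube's actual implementation; the URL is copied from this log and the timeout is an assumption.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz GETs the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the probe sequence in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for the sketch: skip cert verification; a real client
		// would load the cluster CA from the kubeconfig instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 500 responses carry the per-check [+]/[-] breakdown seen above.
			fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
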
	I1018 09:45:42.577400  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:42.581993  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:45:42.583021  391061 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:42.583043  391061 api_server.go:131] duration metric: took 1.006747407s to wait for apiserver health ...
	I1018 09:45:42.583053  391061 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:42.586716  391061 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:42.586760  391061 system_pods.go:61] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:42.586776  391061 system_pods.go:61] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:42.586799  391061 system_pods.go:61] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:45:42.586813  391061 system_pods.go:61] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:42.586863  391061 system_pods.go:61] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:42.586878  391061 system_pods.go:61] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:45:42.586889  391061 system_pods.go:61] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:42.586899  391061 system_pods.go:61] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:42.586912  391061 system_pods.go:74] duration metric: took 3.851478ms to wait for pod list to return data ...
	I1018 09:45:42.586926  391061 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:42.589741  391061 default_sa.go:45] found service account: "default"
	I1018 09:45:42.589766  391061 default_sa.go:55] duration metric: took 2.832506ms for default service account to be created ...
	I1018 09:45:42.589785  391061 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:45:42.593436  391061 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:42.593470  391061 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:42.593482  391061 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:42.593493  391061 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:45:42.593501  391061 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:42.593516  391061 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:42.593528  391061 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:45:42.593539  391061 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:42.593559  391061 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:42.593571  391061 system_pods.go:126] duration metric: took 3.778642ms to wait for k8s-apps to be running ...
	I1018 09:45:42.593589  391061 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:45:42.593628  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:42.607437  391061 system_svc.go:56] duration metric: took 13.83871ms WaitForService to wait for kubelet
	I1018 09:45:42.607463  391061 kubeadm.go:586] duration metric: took 3.274237526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:45:42.607481  391061 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:42.610633  391061 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:42.610659  391061 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:42.610676  391061 node_conditions.go:105] duration metric: took 3.189324ms to run NodePressure ...
	I1018 09:45:42.610690  391061 start.go:241] waiting for startup goroutines ...
	I1018 09:45:42.610700  391061 start.go:246] waiting for cluster config update ...
	I1018 09:45:42.610711  391061 start.go:255] writing updated cluster config ...
	I1018 09:45:42.610989  391061 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:42.614869  391061 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:42.618204  391061 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:45:44.625233  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:45:46.625594  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
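
The pod_ready waiter above repeatedly checks whether coredns-66bc5c9577-ksdf9 has its Ready condition set to True. A minimal client-go sketch of that check follows; it is illustrative only, not minikube's implementation, and the kubeconfig path, namespace, and pod name are assumptions taken from this log.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True, the same
// condition the waiter above is blocking on.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: use the default ~/.kube/config; minikube uses the
	// profile's own kubeconfig context.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-ksdf9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
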
	I1018 09:45:43.888170  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:43.888770  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:43.888866  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:43.888962  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:43.924472  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:43.924500  353123 cri.go:89] found id: ""
	I1018 09:45:43.924511  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:45:43.924573  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:43.929570  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:43.929636  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:43.965802  353123 cri.go:89] found id: ""
	I1018 09:45:43.965845  353123 logs.go:282] 0 containers: []
	W1018 09:45:43.965856  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:43.965864  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:43.965919  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:43.994915  353123 cri.go:89] found id: ""
	I1018 09:45:43.994951  353123 logs.go:282] 0 containers: []
	W1018 09:45:43.994966  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:43.994973  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:43.995035  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:44.024685  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:44.024712  353123 cri.go:89] found id: ""
	I1018 09:45:44.024724  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:44.024787  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.028840  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:44.028896  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:44.065751  353123 cri.go:89] found id: ""
	I1018 09:45:44.065782  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.065793  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:44.065801  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:44.065914  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:44.106664  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:44.106692  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:44.106698  353123 cri.go:89] found id: ""
	I1018 09:45:44.106714  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:44.106775  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.114471  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.120400  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:44.120569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:44.177487  353123 cri.go:89] found id: ""
	I1018 09:45:44.177515  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.177660  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:44.177671  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:44.177853  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:44.218056  353123 cri.go:89] found id: ""
	I1018 09:45:44.218088  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.218118  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:44.218140  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:44.218157  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:44.254064  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:44.254100  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:44.293814  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:44.293874  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:44.431097  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:44.431148  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:44.478946  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:44.478979  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:44.547989  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:44.548022  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:44.586119  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:44.586155  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:44.659273  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:44.659309  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:44.683003  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:44.683044  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:44.773289  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
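
In this stretch the probe fails at the TCP layer ("connect: connection refused") rather than with an HTTP 500, and "kubectl describe nodes" against localhost:8443 fails for the same reason: the kube-apiserver container is not listening yet, so minikube falls back to gathering container logs via crictl. A minimal sketch that distinguishes "not listening" from "listening but unhealthy" (the address is copied from this log; illustrative only):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Connection refused here corresponds to the api_server.go:269
	// "stopped: ..." lines above; an open port with a 500 from /healthz
	// would instead match the api_server.go:279 lines.
	conn, err := net.DialTimeout("tcp", "192.168.85.2:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("port open; check /healthz for readiness")
}
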
	I1018 09:45:47.274933  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:47.275407  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:47.275469  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:47.275587  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:47.313368  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:47.313396  353123 cri.go:89] found id: ""
	I1018 09:45:47.313407  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:45:47.313469  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.318875  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:47.318951  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:47.358305  353123 cri.go:89] found id: ""
	I1018 09:45:47.358331  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.358340  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:47.358348  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:47.358411  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:47.394289  353123 cri.go:89] found id: ""
	I1018 09:45:47.394362  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.394375  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:47.394383  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:47.394436  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:47.433806  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:47.433852  353123 cri.go:89] found id: ""
	I1018 09:45:47.433862  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:47.433917  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.438841  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:47.438906  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:47.467930  353123 cri.go:89] found id: ""
	I1018 09:45:47.467958  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.467969  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:47.467976  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:47.468038  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:47.496921  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:47.496943  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:47.496947  353123 cri.go:89] found id: ""
	I1018 09:45:47.496956  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:47.497020  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.500969  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.504943  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:47.504993  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:47.532458  353123 cri.go:89] found id: ""
	I1018 09:45:47.532481  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.532489  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:47.532494  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:47.532552  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:47.559397  353123 cri.go:89] found id: ""
	I1018 09:45:47.559424  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.559434  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:47.559450  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:47.559465  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:47.588691  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:47.588721  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:47.623441  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:47.623468  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:47.661447  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:47.661474  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:47.725061  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:47.725096  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:47.791940  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:47.791979  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:47.887049  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:47.887085  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:47.905941  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:47.905973  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:47.975407  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:47.975430  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:47.975447  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	
	
	==> CRI-O <==
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.783734507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.787494572Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ab067c82-770c-476c-b7aa-76d9efeba3b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.788324487Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=87365546-3e8e-48a8-9274-d477451dc0bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.78937543Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.790260749Z" level=info msg="Ran pod sandbox 28ca068a3930bcb084d1710f344c27bc07ffa2f0458e3d32cac2472a30cac03a with infra container: kube-system/kindnet-z7dcb/POD" id=ab067c82-770c-476c-b7aa-76d9efeba3b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.79162408Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=16ee9371-19a4-4d89-bd1e-4c16473444ec name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.79269715Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.794339384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a48cd9d4-bf6c-486a-9f62-7de142417dcc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.794557897Z" level=info msg="Ran pod sandbox bd1b4699edf639c51131476b9d49a57cd427cab5edb06b5c2091461a0411260f with infra container: kube-system/kube-proxy-nq79m/POD" id=87365546-3e8e-48a8-9274-d477451dc0bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.796148126Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3b78dce7-3ff6-4247-85fa-69a44328a796 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.796353076Z" level=info msg="Creating container: kube-system/kindnet-z7dcb/kindnet-cni" id=04437c9d-f8bb-4d21-aabd-aa6e738f5d06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.797297417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=113bd0e1-8006-4183-8a84-3ff0f7442f9d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.797626337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.798688654Z" level=info msg="Creating container: kube-system/kube-proxy-nq79m/kube-proxy" id=561d4e0f-c0d1-4227-920e-16cc907e869b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.80139987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.802525791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.803197133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.810702337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.811310016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.834476212Z" level=info msg="Created container 204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20: kube-system/kindnet-z7dcb/kindnet-cni" id=04437c9d-f8bb-4d21-aabd-aa6e738f5d06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.836177134Z" level=info msg="Starting container: 204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20" id=431a92ba-383b-49d3-89be-0a2e4a4c16a6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.838793535Z" level=info msg="Started container" PID=1030 containerID=204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20 description=kube-system/kindnet-z7dcb/kindnet-cni id=431a92ba-383b-49d3-89be-0a2e4a4c16a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28ca068a3930bcb084d1710f344c27bc07ffa2f0458e3d32cac2472a30cac03a
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.84212974Z" level=info msg="Created container ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68: kube-system/kube-proxy-nq79m/kube-proxy" id=561d4e0f-c0d1-4227-920e-16cc907e869b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.843478048Z" level=info msg="Starting container: ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68" id=8a32dd68-5a14-4156-b104-eecea5f3283a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.848688716Z" level=info msg="Started container" PID=1031 containerID=ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68 description=kube-system/kube-proxy-nq79m/kube-proxy id=8a32dd68-5a14-4156-b104-eecea5f3283a name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd1b4699edf639c51131476b9d49a57cd427cab5edb06b5c2091461a0411260f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ed56304dada2b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   bd1b4699edf63       kube-proxy-nq79m                            kube-system
	204965dc89584       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   28ca068a3930b       kindnet-z7dcb                               kube-system
	082f88526b1ee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   2b606006c91ff       kube-scheduler-newest-cni-708733            kube-system
	ff767733a8265       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   f5aaa6430420a       kube-apiserver-newest-cni-708733            kube-system
	db7341eafc41c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   431f8bf25796e       etcd-newest-cni-708733                      kube-system
	4d31b12a89bd9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   311e1b37fb613       kube-controller-manager-newest-cni-708733   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-708733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-708733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-708733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-708733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:45:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-708733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b382c5a4-fd22-47f3-b8a6-fb04181833ca
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-708733                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-z7dcb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-newest-cni-708733             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-newest-cni-708733    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-nq79m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-newest-cni-708733             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 30s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node newest-cni-708733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node newest-cni-708733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node newest-cni-708733 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           32s   node-controller  Node newest-cni-708733 event: Registered Node newest-cni-708733 in Controller
	  Normal  RegisteredNode           2s    node-controller  Node newest-cni-708733 event: Registered Node newest-cni-708733 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be] <==
	{"level":"warn","ts":"2025-10-18T09:45:44.467619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.484669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.499772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.502771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.510071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.517777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.524623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.530599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.538629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.545604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.552274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.560054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.567753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.576209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.584396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.592864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.600208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.608813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.616698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.625335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.632531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.648161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.656010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.663936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.735964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:45:51 up  1:28,  0 user,  load average: 2.50, 2.78, 1.87
	Linux newest-cni-708733 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20] <==
	I1018 09:45:46.094214       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:46.094514       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:45:46.094631       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:46.094650       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:46.094668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:46.297805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:46.297906       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:46.297940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:46.298069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:46.690653       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:46.690711       1 metrics.go:72] Registering metrics
	I1018 09:45:46.690795       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9] <==
	I1018 09:45:45.401176       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:45:45.405396       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:45:45.405411       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:45:45.405417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:45:45.405424       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:45:45.401164       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:45:45.401197       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:45:45.405879       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:45:45.409985       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:45:45.414937       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:45:45.434108       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:45.436028       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:45:45.441039       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:45.652354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:45.854551       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:45:45.895674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:45.925756       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:45.935555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:46.006136       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.150.235"}
	I1018 09:45:46.026767       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.197.188"}
	I1018 09:45:46.308187       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:48.732461       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:45:49.033408       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:49.183744       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d] <==
	I1018 09:45:48.726660       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:45:48.729021       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:45:48.729111       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:45:48.729123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:48.729420       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:45:48.729521       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:45:48.729532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:45:48.730330       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:45:48.731510       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:45:48.731600       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:45:48.732784       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:45:48.733654       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:45:48.734372       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:45:48.735780       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:45:48.737222       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:45:48.737324       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:45:48.737771       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:45:48.739077       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:45:48.741307       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:48.741405       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:45:48.744600       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:45:48.772924       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:48.783929       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:48.783950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:45:48.783960       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68] <==
	I1018 09:45:45.898411       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:45.975123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:46.077039       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:46.077081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:45:46.077188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:46.106221       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:46.106379       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:46.113640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:46.114237       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:46.114396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:46.117272       1 config.go:200] "Starting service config controller"
	I1018 09:45:46.117379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:46.117569       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:46.117612       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:46.117679       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:46.118517       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:46.117965       1 config.go:309] "Starting node config controller"
	I1018 09:45:46.118588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:46.118601       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:45:46.218270       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:45:46.218293       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:46.219479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce] <==
	I1018 09:45:43.800477       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:45:45.318595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:45:45.318641       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:45:45.318653       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:45:45.318663       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:45:45.355676       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:45:45.355711       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:45.359577       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:45.359615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:45.360027       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:45:45.360163       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:45:45.462310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:45:44 newest-cni-708733 kubelet[660]: E1018 09:45:44.515171     660 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-708733\" not found" node="newest-cni-708733"
	Oct 18 09:45:44 newest-cni-708733 kubelet[660]: E1018 09:45:44.515274     660 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-708733\" not found" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.372894     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.397242     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-708733\" already exists" pod="kube-system/kube-scheduler-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.397403     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.418149     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-708733\" already exists" pod="kube-system/etcd-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.418370     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.431339     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-708733\" already exists" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.431421     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.460864     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-708733\" already exists" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461310     660 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461418     660 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461463     660 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.463602     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.467740     660 apiserver.go:52] "Watching apiserver"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.571670     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646251     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-lib-modules\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646314     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-xtables-lock\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646407     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-lib-modules\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646443     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-xtables-lock\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646468     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-cni-cfg\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:45:48 newest-cni-708733 kubelet[660]: I1018 09:45:48.017565     660 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
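The component logs above all show the same client-go startup idiom: a "Waiting for caches to sync" line followed by "Caches are synced" once the informer's initial LIST/WATCH completes (kindnet, kube-proxy, and kube-controller-manager each log the pair). A minimal sketch of that pattern, assuming a reachable kubeconfig at the default location; the handler and variable names are illustrative, not taken from any of these components:

	package main

	import (
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: ~/.kube/config points at a live cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Shared informer factory with a 30s resync, as most controllers use.
		factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
		pods := factory.Core().V1().Pods().Informer()
		pods.AddEventHandler(cache.ResourceEventHandlerFuncs{
			AddFunc: func(obj interface{}) {
				fmt.Println("pod added:", obj.(*corev1.Pod).Name)
			},
		})

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)
		// The "Waiting for caches to sync" / "Caches are synced" pair in the
		// logs above brackets exactly this kind of call.
		cache.WaitForCacheSync(stop, pods.HasSynced)
		time.Sleep(10 * time.Second) // let a few events arrive
	}
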
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-708733 -n newest-cni-708733: exit status 2 (329.151085ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
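The --format={{.APIServer}} flag in the status call above is a Go text/template evaluated against minikube's status struct, which is why the command prints the single field ("Running") even though it exits 2. A self-contained illustration of the mechanism; the Status type below is a stand-in with plausible field names, not minikube's actual definition:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for minikube's status struct; {{.Host}} and {{.APIServer}}
	// are the fields the harness queries in this report.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
		// prints: Running
	}
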
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-708733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r: exit status 1 (59.911196ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-pcqqp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-c8n2g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5bx7r" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r: exit status 1
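The sweep for non-running pods above uses a server-side field selector (status.phase!=Running); the NotFound errors from the follow-up describe suggest those pods were deleted or replaced between the two commands. The same filter is available through client-go; a hedged, self-contained sketch under the same kubeconfig assumption as the earlier informer example:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Same server-side filter the harness uses: everything not in phase Running.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s (%s)\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
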
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-708733
helpers_test.go:243: (dbg) docker inspect newest-cni-708733:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	        "Created": "2025-10-18T09:44:58.376755553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 392060,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:45:35.839912456Z",
	            "FinishedAt": "2025-10-18T09:45:35.036264253Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hostname",
	        "HostsPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/hosts",
	        "LogPath": "/var/lib/docker/containers/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475/589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475-json.log",
	        "Name": "/newest-cni-708733",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-708733:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-708733",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "589c5abc3ddac56c8197a6d1ddc9ebf3c85774aca5747e4f8760f46be595f475",
	                "LowerDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ffa4278b5a07c15f6b720f56ebd3b4e4ff8e546aecc435abdd6e9191da7093aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-708733",
	                "Source": "/var/lib/docker/volumes/newest-cni-708733/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-708733",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-708733",
	                "name.minikube.sigs.k8s.io": "newest-cni-708733",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "92b19aa89f60f57ca70370f1e3221723209f4ebdf217098a82c0c9b5059ae9b7",
	            "SandboxKey": "/var/run/docker/netns/92b19aa89f60",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33223"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33224"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33227"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33225"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33226"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-708733": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:bc:0b:1b:4b:70",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1aaffc18dfa2904bed47c15aa8ec5d5036ec16333dc17a28b2beac767bfe6ebf",
	                    "EndpointID": "5a5be5fb230dbc668317223daaa59feae514027529df9380695c66e8c2376ff5",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-708733",
	                        "589c5abc3dda"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
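In the inspect output above, HostConfig.PortBindings requests HostPort "" for every published port (let the daemon pick a free one), while NetworkSettings.Ports records what was actually assigned (33223-33227 here); the provisioning log further down reads exactly that field with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A minimal sketch of the same lookup with the Docker Engine Go SDK, assuming a local daemon and this container name:

	package main

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			panic(err)
		}
		defer cli.Close()

		info, err := cli.ContainerInspect(context.Background(), "newest-cni-708733")
		if err != nil {
			panic(err)
		}
		// PortBindings asked for "any free port"; Ports holds the assignments.
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}
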
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733: exit status 2 (306.163087ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-708733 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-619885 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p old-k8s-version-619885                                                                                                                                                                                                                     │ old-k8s-version-619885       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ image   │ no-preload-589869 image list --format=json                                                                                                                                                                                                    │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ pause   │ -p no-preload-589869 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │                     │
	│ start   │ -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p newest-cni-708733 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-942905 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ image   │ newest-cni-708733 image list --format=json                                                                                                                                                                                                    │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ pause   │ -p newest-cni-708733 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:45:35
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:45:35.612113  391835 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:45:35.612372  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612384  391835 out.go:374] Setting ErrFile to fd 2...
	I1018 09:45:35.612390  391835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:45:35.612627  391835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:45:35.613204  391835 out.go:368] Setting JSON to false
	I1018 09:45:35.614405  391835 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5280,"bootTime":1760775456,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:45:35.614495  391835 start.go:141] virtualization: kvm guest
	I1018 09:45:35.616488  391835 out.go:179] * [newest-cni-708733] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:45:35.617763  391835 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:45:35.617765  391835 notify.go:220] Checking for updates...
	I1018 09:45:35.619047  391835 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:45:35.620517  391835 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:35.621508  391835 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:45:35.622619  391835 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:45:35.623653  391835 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:45:35.625265  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:35.625773  391835 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:45:35.648625  391835 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:45:35.648730  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.707534  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.696960967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.707701  391835 docker.go:318] overlay module found
	I1018 09:45:35.710095  391835 out.go:179] * Using the docker driver based on existing profile
	I1018 09:45:35.711170  391835 start.go:305] selected driver: docker
	I1018 09:45:35.711185  391835 start.go:925] validating driver "docker" against &{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.711263  391835 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:45:35.711899  391835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:45:35.766563  391835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-18 09:45:35.756934982 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:45:35.766911  391835 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:35.766941  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:35.767009  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:35.767062  391835 start.go:349] cluster config:
	{Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:35.768913  391835 out.go:179] * Starting "newest-cni-708733" primary control-plane node in "newest-cni-708733" cluster
	I1018 09:45:35.770258  391835 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:45:35.771551  391835 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:45:35.772648  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:35.772696  391835 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:45:35.772707  391835 cache.go:58] Caching tarball of preloaded images
	I1018 09:45:35.772786  391835 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:45:35.772907  391835 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:45:35.772988  391835 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:45:35.773146  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:35.793193  391835 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:45:35.793211  391835 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:45:35.793226  391835 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:45:35.793247  391835 start.go:360] acquireMachinesLock for newest-cni-708733: {Name:mkb1aaee475623ac79c9cbc5f8d5e2dda34020d2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:45:35.793300  391835 start.go:364] duration metric: took 36.906µs to acquireMachinesLock for "newest-cni-708733"
	I1018 09:45:35.793316  391835 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:45:35.793321  391835 fix.go:54] fixHost starting: 
	I1018 09:45:35.793514  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:35.810764  391835 fix.go:112] recreateIfNeeded on newest-cni-708733: state=Stopped err=<nil>
	W1018 09:45:35.810808  391835 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:45:32.487875  391061 out.go:252] * Restarting existing docker container for "embed-certs-055175" ...
	I1018 09:45:32.487930  391061 cli_runner.go:164] Run: docker start embed-certs-055175
	I1018 09:45:32.746738  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:32.766565  391061 kic.go:430] container "embed-certs-055175" state is running.
	I1018 09:45:32.767066  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:32.787489  391061 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/config.json ...
	I1018 09:45:32.787761  391061 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:32.787860  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:32.807525  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:32.807763  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:32.807779  391061 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:32.808459  391061 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36084->127.0.0.1:33217: read: connection reset by peer
	I1018 09:45:35.951449  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:35.951481  391061 ubuntu.go:182] provisioning hostname "embed-certs-055175"
	I1018 09:45:35.951567  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:35.970253  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:35.970525  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:35.970577  391061 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-055175 && echo "embed-certs-055175" | sudo tee /etc/hostname
	I1018 09:45:36.120062  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-055175
	
	I1018 09:45:36.120141  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.139369  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.139660  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.139685  391061 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-055175' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-055175/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-055175' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:36.279283  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:36.279331  391061 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:36.279360  391061 ubuntu.go:190] setting up certificates
	I1018 09:45:36.279373  391061 provision.go:84] configureAuth start
	I1018 09:45:36.279436  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:36.301592  391061 provision.go:143] copyHostCerts
	I1018 09:45:36.301663  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:36.301685  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:36.301767  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:36.301935  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:36.301952  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:36.301999  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:36.302090  391061 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:36.302102  391061 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:36.302140  391061 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:36.302218  391061 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.embed-certs-055175 san=[127.0.0.1 192.168.76.2 embed-certs-055175 localhost minikube]
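Note: the "generating server cert ... san=[...]" step above signs a machine certificate against the profile CA with both IP and DNS SANs. A runnable sketch of that signing step using Go's crypto/x509; the CA is generated inline here for self-containment (minikube loads it from ca.pem/ca-key.pem), and error handling is elided for brevity:

    // servercert.go — sign a server cert with the SANs reported in the log.
    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key pair (loaded from disk in minikube, generated here for the sketch).
    	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server cert with the IP and DNS SANs from the provision log line.
    	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-055175"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    		DNSNames:     []string{"embed-certs-055175", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }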
	I1018 09:45:36.521938  391061 provision.go:177] copyRemoteCerts
	I1018 09:45:36.522007  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:36.522049  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.539806  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:36.638382  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:36.656542  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1018 09:45:36.674914  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:36.692375  391061 provision.go:87] duration metric: took 412.989421ms to configureAuth
	I1018 09:45:36.692399  391061 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:36.692583  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:36.692696  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:36.711813  391061 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.712122  391061 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1018 09:45:36.712145  391061 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:36.996777  391061 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:36.996808  391061 machine.go:96] duration metric: took 4.209028137s to provisionDockerMachine
	I1018 09:45:36.996838  391061 start.go:293] postStartSetup for "embed-certs-055175" (driver="docker")
	I1018 09:45:36.996853  391061 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:36.996924  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:36.996992  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.015643  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.112419  391061 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:37.115866  391061 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:37.115892  391061 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:37.115901  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:37.115940  391061 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:37.116006  391061 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:37.116105  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:37.123537  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:37.140936  391061 start.go:296] duration metric: took 144.080164ms for postStartSetup
	I1018 09:45:37.141011  391061 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:37.141113  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.158840  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.254266  391061 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:37.258887  391061 fix.go:56] duration metric: took 4.791318273s for fixHost
	I1018 09:45:37.258913  391061 start.go:83] releasing machines lock for "embed-certs-055175", held for 4.791367111s
	I1018 09:45:37.258983  391061 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-055175
	I1018 09:45:37.276795  391061 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:37.276844  391061 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:37.276893  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.276895  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:37.295580  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.295867  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:37.442421  391061 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:37.449145  391061 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:37.485446  391061 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:37.490286  391061 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:37.490344  391061 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:37.498440  391061 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:45:37.498462  391061 start.go:495] detecting cgroup driver to use...
	I1018 09:45:37.498498  391061 detect.go:190] detected "systemd" cgroup driver on host os
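Note: the cgroup-driver detection above picks "systemd" because the host init is systemd. minikube's detect.go is more thorough, but the core heuristic can be sketched as: if PID 1 is systemd, use the systemd cgroup driver, otherwise fall back to cgroupfs (an assumption-level illustration, not the actual implementation):

    // cgroupdetect.go — minimal "which cgroup driver" heuristic.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func cgroupDriver() string {
    	comm, err := os.ReadFile("/proc/1/comm")
    	if err == nil && strings.TrimSpace(string(comm)) == "systemd" {
    		return "systemd"
    	}
    	return "cgroupfs"
    }

    func main() {
    	fmt.Printf("detected %q cgroup driver on host os\n", cgroupDriver())
    }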
	I1018 09:45:37.498541  391061 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:37.512575  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:37.524383  391061 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:37.524431  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:37.538338  391061 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:37.550505  391061 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:37.630207  391061 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:37.707104  391061 docker.go:234] disabling docker service ...
	I1018 09:45:37.707165  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:37.721802  391061 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:37.734681  391061 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:37.810403  391061 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:37.892105  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:37.904421  391061 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:37.918908  391061 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:37.919002  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.927975  391061 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:37.928025  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.937739  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.946621  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.955765  391061 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:37.963854  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.972623  391061 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.981215  391061 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:37.990025  391061 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:37.997012  391061 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:38.004111  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.083139  391061 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:45:38.194280  391061 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:38.194350  391061 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:38.198391  391061 start.go:563] Will wait 60s for crictl version
	I1018 09:45:38.198444  391061 ssh_runner.go:195] Run: which crictl
	I1018 09:45:38.202260  391061 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:38.226451  391061 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:38.226528  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.255560  391061 ssh_runner.go:195] Run: crio --version
	I1018 09:45:38.285154  391061 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:36.049688  353123 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.056359588s)
	W1018 09:45:36.049730  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1018 09:45:36.049740  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:36.049755  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:36.082656  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:36.082690  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:36.185625  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:36.185657  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:36.223015  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:36.223045  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:36.257875  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:36.257910  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:36.320259  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:36.320290  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:38.286182  391061 cli_runner.go:164] Run: docker network inspect embed-certs-055175 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:38.303601  391061 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:38.307969  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
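Note: the /etc/hosts rewrite above uses a filter-then-append pattern: drop any stale "host.minikube.internal" line, append the current mapping, and copy the temp file back into place. The same idea in Go, with illustrative paths (writing the real /etc/hosts needs root):

    // hostsentry.go — idempotent upsert of one hosts entry, as in the log.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line == "" || strings.HasSuffix(line, "\t"+name) {
    			continue // drop blank lines and any stale entry for this name
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path) // replace in one step (the log uses cp of /tmp/h.$$)
    }

    func main() {
    	if err := upsertHost("hosts.test", "192.168.76.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }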
	I1018 09:45:38.318414  391061 kubeadm.go:883] updating cluster {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:38.318562  391061 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:38.318621  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.351678  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.351700  391061 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:38.351743  391061 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:38.376983  391061 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:38.377006  391061 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:38.377014  391061 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:38.377106  391061 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-055175 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:45:38.377172  391061 ssh_runner.go:195] Run: crio config
	I1018 09:45:38.422001  391061 cni.go:84] Creating CNI manager for ""
	I1018 09:45:38.422023  391061 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:38.422042  391061 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:45:38.422063  391061 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-055175 NodeName:embed-certs-055175 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:38.422186  391061 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-055175"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:38.422240  391061 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:38.430216  391061 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:38.430276  391061 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:38.438081  391061 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:38.450317  391061 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:38.462520  391061 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1018 09:45:38.474657  391061 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:38.478282  391061 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:38.488221  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:38.566896  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:38.591111  391061 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175 for IP: 192.168.76.2
	I1018 09:45:38.591138  391061 certs.go:195] generating shared ca certs ...
	I1018 09:45:38.591161  391061 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:38.591310  391061 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:38.591384  391061 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:38.591402  391061 certs.go:257] generating profile certs ...
	I1018 09:45:38.591504  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/client.key
	I1018 09:45:38.591598  391061 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key.d17ebb9e
	I1018 09:45:38.591678  391061 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key
	I1018 09:45:38.591811  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:38.591882  391061 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:38.591896  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:38.591930  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:38.591966  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:38.591999  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:38.592055  391061 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:38.592628  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:38.611514  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:38.630402  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:38.649635  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:38.673181  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1018 09:45:38.692242  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1018 09:45:38.709954  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:38.728001  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/embed-certs-055175/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:38.745902  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:38.763592  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:38.781470  391061 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:38.799868  391061 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:38.812542  391061 ssh_runner.go:195] Run: openssl version
	I1018 09:45:38.818721  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:38.827249  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831071  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.831126  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:38.867725  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:45:38.876160  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:38.884525  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888219  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.888264  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:38.922467  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:38.930945  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:38.939990  391061 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943700  391061 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.943757  391061 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:38.978998  391061 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
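Note: the openssl/ln pairs above implement OpenSSL-style trust directories, where each CA must also be reachable as <subject-hash>.0. A sketch of that hash-symlink step in Go, shelling out to openssl for the subject hash (the same flags the log uses) rather than reimplementing OpenSSL's subject canonicalization; paths are illustrative:

    // cahash.go — link a CA cert under its OpenSSL subject hash.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func linkByHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
    	os.Remove(link) // emulate ln -fs: replace any existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkByHash("minikubeCA.pem", "."); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("linked")
    }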
	I1018 09:45:38.987211  391061 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:38.991075  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:39.025412  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:39.059499  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:39.101020  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:39.146140  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:39.199431  391061 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
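Note: each `openssl x509 -noout -checkend 86400` run above asks "does this certificate expire within the next 24 hours?" (exit status 1 if so). The equivalent check in Go, parsing the PEM certificate and comparing NotAfter; the path is illustrative:

    // checkend.go — Go equivalent of openssl's -checkend 86400.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("apiserver.crt", 86400*time.Second)
    	if err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("expires within 24h:", soon) // openssl exits non-zero in this case
    }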
	I1018 09:45:39.253543  391061 kubeadm.go:400] StartCluster: {Name:embed-certs-055175 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-055175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:45:39.253654  391061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:39.253726  391061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:39.287480  391061 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:45:39.287507  391061 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:45:39.287514  391061 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:45:39.287518  391061 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:45:39.287523  391061 cri.go:89] found id: ""
	I1018 09:45:39.287581  391061 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:39.301644  391061 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:39Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:39.301714  391061 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:39.310767  391061 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:39.310787  391061 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:39.310879  391061 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:39.319011  391061 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:39.319811  391061 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-055175" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.320288  391061 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-055175" cluster setting kubeconfig missing "embed-certs-055175" context setting]
	I1018 09:45:39.321074  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.322981  391061 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:39.330833  391061 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1018 09:45:39.330865  391061 kubeadm.go:601] duration metric: took 20.071828ms to restartPrimaryControlPlane
	I1018 09:45:39.330874  391061 kubeadm.go:402] duration metric: took 77.343946ms to StartCluster
	I1018 09:45:39.330893  391061 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:39.330969  391061 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:39.332950  391061 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
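Note: the "kubeconfig needs updating (will repair)" step above adds the missing cluster and context entries for the profile and rewrites the file under a lock. A sketch of that repair using k8s.io/client-go (requires the client-go module); the names and server address are taken from the log, but this is not minikube's actual repair code:

    // kubeconfigrepair.go — ensure a kubeconfig has cluster + context entries.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	api "k8s.io/client-go/tools/clientcmd/api"
    )

    func repair(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &api.Cluster{Server: server} // missing "cluster setting"
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name} // missing "context setting"
    	}
    	cfg.CurrentContext = name
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := repair("kubeconfig", "embed-certs-055175", "https://192.168.76.2:8443")
    	fmt.Println("repair err:", err)
    }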
	I1018 09:45:39.333199  391061 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:39.333382  391061 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:39.333486  391061 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-055175"
	I1018 09:45:39.333505  391061 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-055175"
	W1018 09:45:39.333518  391061 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:39.333527  391061 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:39.333540  391061 addons.go:69] Setting dashboard=true in profile "embed-certs-055175"
	I1018 09:45:39.333583  391061 addons.go:238] Setting addon dashboard=true in "embed-certs-055175"
	W1018 09:45:39.333594  391061 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:39.333598  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333601  391061 addons.go:69] Setting default-storageclass=true in profile "embed-certs-055175"
	I1018 09:45:39.333631  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.333630  391061 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-055175"
	I1018 09:45:39.334122  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334143  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.334172  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.335198  391061 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:39.336588  391061 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:39.364419  391061 addons.go:238] Setting addon default-storageclass=true in "embed-certs-055175"
	W1018 09:45:39.364441  391061 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:39.364467  391061 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:45:39.364941  391061 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:45:39.365279  391061 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:39.365348  391061 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:39.366461  391061 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.366483  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:39.366536  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.369244  391061 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:35.812543  391835 out.go:252] * Restarting existing docker container for "newest-cni-708733" ...
	I1018 09:45:35.812620  391835 cli_runner.go:164] Run: docker start newest-cni-708733
	I1018 09:45:36.066412  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:36.087638  391835 kic.go:430] container "newest-cni-708733" state is running.
	I1018 09:45:36.088075  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:36.108867  391835 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/config.json ...
	I1018 09:45:36.109119  391835 machine.go:93] provisionDockerMachine start ...
	I1018 09:45:36.109186  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:36.129372  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:36.129746  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:36.129764  391835 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:45:36.130410  391835 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56176->127.0.0.1:33223: read: connection reset by peer
	I1018 09:45:39.281604  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.281635  391835 ubuntu.go:182] provisioning hostname "newest-cni-708733"
	I1018 09:45:39.281704  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.304537  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.304897  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.304921  391835 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-708733 && echo "newest-cni-708733" | sudo tee /etc/hostname
	I1018 09:45:39.471607  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-708733
	
	I1018 09:45:39.471684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.493328  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:39.493535  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:39.493548  391835 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-708733' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-708733/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-708733' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:45:39.648618  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:45:39.648648  391835 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:45:39.648670  391835 ubuntu.go:190] setting up certificates
	I1018 09:45:39.648683  391835 provision.go:84] configureAuth start
	I1018 09:45:39.648740  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:39.671912  391835 provision.go:143] copyHostCerts
	I1018 09:45:39.671977  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:45:39.672067  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:45:39.672162  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:45:39.672259  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:45:39.672269  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:45:39.672296  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:45:39.672348  391835 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:45:39.672358  391835 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:45:39.672380  391835 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:45:39.672424  391835 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.newest-cni-708733 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-708733]
	I1018 09:45:39.936585  391835 provision.go:177] copyRemoteCerts
	I1018 09:45:39.936652  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:45:39.936752  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:39.959548  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.065365  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:45:40.086638  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:45:40.108607  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1018 09:45:40.132253  391835 provision.go:87] duration metric: took 483.553625ms to configureAuth
	I1018 09:45:40.132292  391835 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:45:40.132527  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:40.132665  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.153078  391835 main.go:141] libmachine: Using SSH client type: native
	I1018 09:45:40.153352  391835 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33223 <nil> <nil>}
	I1018 09:45:40.153370  391835 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:45:40.448100  391835 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:45:40.448132  391835 machine.go:96] duration metric: took 4.33899731s to provisionDockerMachine
	I1018 09:45:40.448147  391835 start.go:293] postStartSetup for "newest-cni-708733" (driver="docker")
	I1018 09:45:40.448162  391835 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:45:40.448233  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:45:40.448284  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.474620  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.577567  391835 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:45:40.582063  391835 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:45:40.582097  391835 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:45:40.582110  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:45:40.582160  391835 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:45:40.582267  391835 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:45:40.582402  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:45:40.591516  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
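
The filesync step mirrors anything under the local .minikube/files/ tree into the node at the same path. Reproducing the single copy above by hand, using the connection details this run logged (user docker, 127.0.0.1:33223, the profile's id_rsa), would look roughly like:

    # Sketch: manual equivalent of the filesync scp above.
    KEY=/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa
    SRC=/home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem
    scp -i "$KEY" -P 33223 "$SRC" docker@127.0.0.1:/tmp/1346112.pem
    ssh -i "$KEY" -p 33223 docker@127.0.0.1 \
      'sudo mkdir -p /etc/ssl/certs && sudo mv /tmp/1346112.pem /etc/ssl/certs/'
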
	I1018 09:45:39.370168  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:45:39.370188  391061 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:45:39.370247  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.400814  391061 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.400915  391061 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:39.400996  391061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:45:39.405011  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.407383  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.425670  391061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:45:39.505286  391061 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:39.520778  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:39.523155  391061 node_ready.go:35] waiting up to 6m0s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:39.523608  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:45:39.523631  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:45:39.538779  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:39.539364  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:45:39.539438  391061 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:45:39.560150  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:45:39.560179  391061 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:45:39.581867  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:45:39.581933  391061 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:45:39.596852  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:45:39.596884  391061 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:45:39.612014  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:45:39.612039  391061 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:45:39.626575  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:45:39.626600  391061 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:45:39.639500  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:45:39.639525  391061 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:45:39.654074  391061 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:39.654098  391061 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:45:39.670286  391061 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
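
A quick way to confirm the ten dashboard manifests applied above actually came up (namespace name per the upstream dashboard manifests; this check is illustrative, not part of the test):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard \
      get deploy,svc,sa
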
	I1018 09:45:40.899341  391061 node_ready.go:49] node "embed-certs-055175" is "Ready"
	I1018 09:45:40.899374  391061 node_ready.go:38] duration metric: took 1.376176965s for node "embed-certs-055175" to be "Ready" ...
	I1018 09:45:40.899390  391061 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:40.899443  391061 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:41.576093  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055278034s)
	I1018 09:45:41.576162  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.037341616s)
	I1018 09:45:41.576238  391061 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.905916583s)
	I1018 09:45:41.576274  391061 api_server.go:72] duration metric: took 2.24304532s to wait for apiserver process to appear ...
	I1018 09:45:41.576289  391061 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:41.576309  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:41.578020  391061 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-055175 addons enable metrics-server
	
	I1018 09:45:41.582881  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:41.582904  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
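
The 500s above are the normal bootstrap window: only the rbac/bootstrap-roles and scheduling post-start hooks are still pending. /healthz is readable anonymously by default (via the system:public-info-viewer binding), so the same probe can be reproduced with curl; a minimal wait loop against the endpoint from this run:

    # Poll until the apiserver answers HTTP 200 with a plain "ok" body.
    until [ "$(curl -ksf https://192.168.76.2:8443/healthz)" = "ok" ]; do
      sleep 1
    done
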
	I1018 09:45:41.589049  391061 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:45:40.621550  391835 start.go:296] duration metric: took 173.38515ms for postStartSetup
	I1018 09:45:40.621639  391835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:45:40.621684  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.643288  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.745039  391835 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:45:40.750133  391835 fix.go:56] duration metric: took 4.956803913s for fixHost
	I1018 09:45:40.750167  391835 start.go:83] releasing machines lock for "newest-cni-708733", held for 4.95685606s
	I1018 09:45:40.750236  391835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-708733
	I1018 09:45:40.781167  391835 ssh_runner.go:195] Run: cat /version.json
	I1018 09:45:40.781292  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.781186  391835 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:45:40.781618  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:40.812063  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.813770  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:40.940361  391835 ssh_runner.go:195] Run: systemctl --version
	I1018 09:45:41.006764  391835 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:45:41.061782  391835 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:45:41.068085  391835 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:45:41.068161  391835 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:45:41.078354  391835 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
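
The find/mv above masks any stray bridge/podman CNI configs by renaming them with a .mk_disabled suffix (a no-op here, since none were found). The inverse, should the files ever need restoring by hand, is a sketch like:

    # Restore any CNI configs previously masked with the .mk_disabled suffix.
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
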
	I1018 09:45:41.078379  391835 start.go:495] detecting cgroup driver to use...
	I1018 09:45:41.078424  391835 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:45:41.078467  391835 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:45:41.098853  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:45:41.116027  391835 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:45:41.116089  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:45:41.133582  391835 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:45:41.150108  391835 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:45:41.258784  391835 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:45:41.365493  391835 docker.go:234] disabling docker service ...
	I1018 09:45:41.365568  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:45:41.389182  391835 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:45:41.405299  391835 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:45:41.512499  391835 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:45:41.597024  391835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:45:41.609959  391835 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:45:41.624662  391835 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:45:41.624735  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.634047  391835 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:45:41.634099  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.643165  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.652394  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.663256  391835 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:45:41.672317  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.684071  391835 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.694058  391835 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:45:41.705032  391835 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:45:41.715244  391835 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:45:41.725978  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:41.812310  391835 ssh_runner.go:195] Run: sudo systemctl restart crio
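
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings; this is a reconstruction from the commands, not a dump of the actual file:

    cat /etc/crio/crio.conf.d/02-crio.conf
    # expected (reconstructed) key settings:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [
    #     "net.ipv4.ip_unprivileged_port_start=0",
    #   ]
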
	I1018 09:45:41.928316  391835 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:45:41.928398  391835 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:45:41.933300  391835 start.go:563] Will wait 60s for crictl version
	I1018 09:45:41.933375  391835 ssh_runner.go:195] Run: which crictl
	I1018 09:45:41.937695  391835 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:45:41.968232  391835 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:45:41.968322  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.008722  391835 ssh_runner.go:195] Run: crio --version
	I1018 09:45:42.051058  391835 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:45:42.052454  391835 cli_runner.go:164] Run: docker network inspect newest-cni-708733 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:45:42.076948  391835 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:45:42.082993  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
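
The one-liner above is an idempotent /etc/hosts update: strip any existing line ending in the name, append the fresh mapping, and copy the result back into place. The same pattern as a reusable function (the function name is ours):

    # usage: set_host_entry <ip> <name>
    set_host_entry() {
      local ip="$1" name="$2"
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    set_host_entry 192.168.103.1 host.minikube.internal
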
	I1018 09:45:42.098027  391835 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1018 09:45:41.590293  391061 addons.go:514] duration metric: took 2.256916495s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:45:42.076937  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:42.081653  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:42.081688  391061 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:42.099318  391835 kubeadm.go:883] updating cluster {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:45:42.099457  391835 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:45:42.099596  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.132475  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.132500  391835 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:45:42.132566  391835 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:45:42.158774  391835 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:45:42.158804  391835 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:45:42.158815  391835 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:45:42.158983  391835 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-708733 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
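
The [Service] block above is installed as a systemd drop-in (the 10-kubeadm.conf copied to the node a few lines below); the empty ExecStart= line is what resets the base unit's command before the override applies, which is standard systemd drop-in behavior. To inspect the merged unit on the node:

    systemctl cat kubelet                              # base unit plus drop-ins
    systemctl show kubelet -p ExecStart --no-pager     # the effective command line
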
	I1018 09:45:42.159100  391835 ssh_runner.go:195] Run: crio config
	I1018 09:45:42.208450  391835 cni.go:84] Creating CNI manager for ""
	I1018 09:45:42.208480  391835 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:45:42.208500  391835 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1018 09:45:42.208539  391835 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-708733 NodeName:newest-cni-708733 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:45:42.208747  391835 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-708733"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:45:42.208839  391835 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:45:42.217704  391835 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:45:42.217771  391835 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:45:42.225608  391835 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1018 09:45:42.238980  391835 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:45:42.255680  391835 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
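
With the rendered config staged as kubeadm.yaml.new, it can be sanity-checked before use. Whether minikube invokes this itself is not shown in the log; `kubeadm config validate` has existed since v1.26, and the kubeadm binary path below is assumed to sit next to the kubectl path used elsewhere in this run:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
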
	I1018 09:45:42.272042  391835 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:45:42.276501  391835 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:45:42.289252  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:42.374516  391835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:42.395343  391835 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733 for IP: 192.168.103.2
	I1018 09:45:42.395365  391835 certs.go:195] generating shared ca certs ...
	I1018 09:45:42.395386  391835 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:42.395555  391835 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:45:42.395633  391835 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:45:42.395649  391835 certs.go:257] generating profile certs ...
	I1018 09:45:42.395732  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/client.key
	I1018 09:45:42.395806  391835 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key.ffa152cd
	I1018 09:45:42.395874  391835 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key
	I1018 09:45:42.395977  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:45:42.396006  391835 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:45:42.396018  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:45:42.396049  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:45:42.396085  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:45:42.396116  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:45:42.396170  391835 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:45:42.396756  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:45:42.417067  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:45:42.439230  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:45:42.459862  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:45:42.484661  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1018 09:45:42.505965  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:45:42.524153  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:45:42.542892  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/newest-cni-708733/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:45:42.561246  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:45:42.579007  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:45:42.601111  391835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:45:42.619543  391835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:45:42.632771  391835 ssh_runner.go:195] Run: openssl version
	I1018 09:45:42.639054  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:45:42.648098  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652060  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.652121  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:45:42.689227  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:45:42.698817  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:45:42.709921  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715254  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.715316  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:45:42.758602  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:45:42.767388  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:45:42.776532  391835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780462  391835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.780530  391835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:45:42.817681  391835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
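
Each `openssl x509 -hash` call above computes the subject-hash filename that OpenSSL's certificate lookup expects (e.g. b5213941.0 for minikubeCA.pem). The pattern, generalized:

    # Install a CA cert so OpenSSL can find it by subject hash.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
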
	I1018 09:45:42.826307  391835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:45:42.830455  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:45:42.868283  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:45:42.914730  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:45:42.969311  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:45:43.013486  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:45:43.072727  391835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
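
The six -checkend probes above each ask one question: will this certificate still be valid 86400 seconds (24 hours) from now? The same check over all of them at once, as a sketch:

    # Flag any control-plane cert expiring within the next 24h.
    for crt in /var/lib/minikube/certs/{apiserver-etcd-client,apiserver-kubelet-client,front-proxy-client}.crt \
               /var/lib/minikube/certs/etcd/{server,healthcheck-client,peer}.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "expires within 24h: $crt"
    done
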
	I1018 09:45:43.117083  391835 kubeadm.go:400] StartCluster: {Name:newest-cni-708733 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-708733 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I1018 09:45:43.117198  391835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:45:43.117268  391835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:45:43.149877  391835 cri.go:89] found id: "082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce"
	I1018 09:45:43.149897  391835 cri.go:89] found id: "ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9"
	I1018 09:45:43.149902  391835 cri.go:89] found id: "db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be"
	I1018 09:45:43.149907  391835 cri.go:89] found id: "4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d"
	I1018 09:45:43.149910  391835 cri.go:89] found id: ""
	I1018 09:45:43.149950  391835 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:45:43.164027  391835 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:45:43Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:45:43.164105  391835 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:45:43.173542  391835 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:45:43.173562  391835 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:45:43.173610  391835 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:45:43.183087  391835 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:45:43.184252  391835 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-708733" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.185121  391835 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-708733" cluster setting kubeconfig missing "newest-cni-708733" context setting]
	I1018 09:45:43.186065  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.188016  391835 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:45:43.197622  391835 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1018 09:45:43.197652  391835 kubeadm.go:601] duration metric: took 24.083385ms to restartPrimaryControlPlane
	I1018 09:45:43.197662  391835 kubeadm.go:402] duration metric: took 80.590487ms to StartCluster
	I1018 09:45:43.197680  391835 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.197747  391835 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:45:43.200187  391835 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:45:43.200440  391835 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:45:43.200573  391835 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:45:43.200694  391835 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-708733"
	I1018 09:45:43.200697  391835 config.go:182] Loaded profile config "newest-cni-708733": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:45:43.200716  391835 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-708733"
	W1018 09:45:43.200724  391835 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:45:43.200723  391835 addons.go:69] Setting dashboard=true in profile "newest-cni-708733"
	I1018 09:45:43.200740  391835 addons.go:69] Setting default-storageclass=true in profile "newest-cni-708733"
	I1018 09:45:43.200755  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.200765  391835 addons.go:238] Setting addon dashboard=true in "newest-cni-708733"
	I1018 09:45:43.200767  391835 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-708733"
	W1018 09:45:43.200775  391835 addons.go:247] addon dashboard should already be in state true
	I1018 09:45:43.200809  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.201120  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201273  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.201290  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.203194  391835 out.go:179] * Verifying Kubernetes components...
	I1018 09:45:43.205674  391835 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:45:43.230206  391835 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:45:43.230277  391835 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:45:43.231265  391835 addons.go:238] Setting addon default-storageclass=true in "newest-cni-708733"
	W1018 09:45:43.231300  391835 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:45:43.231412  391835 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:43.231426  391835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:45:43.231473  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.231666  391835 host.go:66] Checking if "newest-cni-708733" exists ...
	I1018 09:45:43.232269  391835 cli_runner.go:164] Run: docker container inspect newest-cni-708733 --format={{.State.Status}}
	I1018 09:45:43.232392  391835 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:45:38.888310  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:40.473062  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:55036->192.168.85.2:8443: read: connection reset by peer
	I1018 09:45:40.473131  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:40.473212  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:40.506845  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:40.506916  353123 cri.go:89] found id: "064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	I1018 09:45:40.506931  353123 cri.go:89] found id: ""
	I1018 09:45:40.506946  353123 logs.go:282] 2 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]
	I1018 09:45:40.507011  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.511163  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.515230  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:40.515304  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:40.546337  353123 cri.go:89] found id: ""
	I1018 09:45:40.546363  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.546373  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:40.546380  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:40.546439  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:40.576467  353123 cri.go:89] found id: ""
	I1018 09:45:40.576496  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.576507  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:40.576515  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:40.576575  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:40.618939  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:40.618964  353123 cri.go:89] found id: ""
	I1018 09:45:40.618974  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:40.619033  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.623516  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:40.623599  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:40.659535  353123 cri.go:89] found id: ""
	I1018 09:45:40.659564  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.659575  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:40.659606  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:40.659671  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:40.693235  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:40.693264  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:40.693269  353123 cri.go:89] found id: ""
	I1018 09:45:40.693279  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:40.693345  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.698191  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:40.702375  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:40.702453  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:40.740227  353123 cri.go:89] found id: ""
	I1018 09:45:40.740255  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.740266  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:40.740281  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:40.740346  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:40.778699  353123 cri.go:89] found id: ""
	I1018 09:45:40.778725  353123 logs.go:282] 0 containers: []
	W1018 09:45:40.778736  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:40.778752  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:40.778767  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:40.832286  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:40.832323  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:40.985957  353123 logs.go:123] Gathering logs for kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d] ...
	I1018 09:45:40.986003  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	W1018 09:45:41.025599  353123 logs.go:130] failed kube-apiserver [064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d": Process exited with status 1
	stdout:
	
	stderr:
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	 output: 
	** stderr ** 
	E1018 09:45:41.021744    5929 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist" containerID="064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"
	time="2025-10-18T09:45:41Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d\": container with ID starting with 064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d not found: ID does not exist"
	
	** /stderr **
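	(Editor's note: the NotFound failure above is benign. Between enumerating container IDs with `crictl ps` and tailing their logs, CRI-O replaced the old kube-apiserver container, so the stale ID no longer resolves. A minimal Go sketch of the guard one could add, shelling out to crictl the same way the log gatherer does; `fetchLogsIfPresent` is a hypothetical helper, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// fetchLogsIfPresent tails a container's logs, but re-checks that the ID
	// still exists first, so a racing container GC shows up as a skip rather
	// than a fatal "ID does not exist" error like the one above.
	func fetchLogsIfPresent(id string) (string, error) {
		// `crictl ps -a --quiet --id <id>` prints the ID iff the container exists.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--id", id).Output()
		if err != nil || !strings.Contains(string(out), id) {
			return "", fmt.Errorf("container %s no longer exists, skipping", id)
		}
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		return string(logs), err
	}

	func main() {
		if logs, err := fetchLogsIfPresent("064edb845a5ad30a4b6bc4141e9923278cb90a0a95357bf0c92be5d2c6b65d9d"); err != nil {
			fmt.Println(err)
		} else {
			fmt.Print(logs)
		}
	}

	The window between the two crictl calls still exists, but it shrinks from "whole gathering pass" to a few milliseconds.)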
	I1018 09:45:41.025624  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:41.025640  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:41.093529  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:41.093584  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:41.122401  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:41.122440  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:41.207097  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
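	(Editor's note: "connection to the server localhost:8443 was refused" simply means no process was listening when `kubectl describe nodes` ran; the apiserver container was mid-restart. A cheap way to tell "down" apart from "up but unhealthy" without paying for a kubectl round trip is a plain TCP dial. Sketch, assuming the apiserver binds 8443 on the node as in this run:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// A refused dial here reproduces the kubectl error above without
		// spawning a process; a successful dial only says the port is open.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not listening:", err)
			return
		}
		conn.Close()
		fmt.Println("port 8443 open; kubectl should at least connect")
	})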
	I1018 09:45:41.207126  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:41.207143  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:41.249695  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:41.249733  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:41.281023  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:41.281062  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:41.321273  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:41.321315  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:43.233701  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:45:43.233733  391835 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:45:43.233795  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.268387  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.269203  391835 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:43.269219  391835 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:45:43.269275  391835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-708733
	I1018 09:45:43.274680  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.297168  391835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33223 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/newest-cni-708733/id_rsa Username:docker}
	I1018 09:45:43.370291  391835 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:45:43.386972  391835 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:45:43.387031  391835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:45:43.392747  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:45:43.403892  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:45:43.403918  391835 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:45:43.408215  391835 api_server.go:72] duration metric: took 207.741406ms to wait for apiserver process to appear ...
	I1018 09:45:43.408237  391835 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:45:43.408255  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:43.422788  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:45:43.424491  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:45:43.424556  391835 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:45:43.453971  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:45:43.454064  391835 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:45:43.479605  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:45:43.479630  391835 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:45:43.500907  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:45:43.500934  391835 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:45:43.518012  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:45:43.518080  391835 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:45:43.532061  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:45:43.532138  391835 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:45:43.547334  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:45:43.547409  391835 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:45:43.571918  391835 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:45:43.571945  391835 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:45:43.596323  391835 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
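	(Editor's note: the pattern above is scp-then-apply: each dashboard manifest is staged into /etc/kubernetes/addons over SSH, then a single kubectl process applies the whole set, one -f flag per file, so partial failures surface in one place. A hypothetical standalone equivalent, run on the node itself (sudo and the SSH transport omitted), using the same staged binary and kubeconfig paths as the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyManifests mirrors the single batched invocation above:
	// one kubectl process, one -f flag per staged manifest.
	func applyManifests(kubeconfig string, manifests []string) ([]byte, error) {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		return cmd.CombinedOutput()
	}

	func main() {
		out, err := applyManifests("/var/lib/minikube/kubeconfig", []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the rest of the staged files
		})
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("apply failed:", err)
		}
	})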
	I1018 09:45:45.332506  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:45:45.332534  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:45:45.332550  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.345259  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1018 09:45:45.346369  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1018 09:45:45.408674  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.421427  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:45.421461  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:45.908701  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:45.914931  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:45.914966  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:46.145474  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.752692391s)
	I1018 09:45:46.145557  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.722738408s)
	I1018 09:45:46.145720  391835 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.549352853s)
	I1018 09:45:46.148071  391835 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-708733 addons enable metrics-server
	
	I1018 09:45:46.158697  391835 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1018 09:45:46.160193  391835 addons.go:514] duration metric: took 2.959629061s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:45:46.408901  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:46.414465  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:45:46.414508  391835 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:45:46.908968  391835 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:45:46.914131  391835 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:45:46.915426  391835 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:46.915454  391835 api_server.go:131] duration metric: took 3.507210399s to wait for apiserver health ...
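	(Editor's note: the 403 -> 500 -> 200 progression above is the normal startup sequence, not three distinct faults. Anonymous access to /healthz returns 403 until the RBAC bootstrap roles land; then the endpoint returns 500 while poststarthooks such as rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes are still failing; finally it returns 200 "ok". A poll loop that tolerates all three phases might look like the sketch below, using the endpoint from this run and skipping TLS verification because a bare probe has neither the cluster CA nor client credentials:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   3 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err != nil {
				fmt.Println("not up yet:", err) // connection-refused phase
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthy:", string(body)) // prints "ok"
					return
				}
				// 403: RBAC for system:anonymous not bootstrapped yet.
				// 500: some poststarthook still failing. Both mean retry.
				fmt.Printf("status %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
	})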
	I1018 09:45:46.915464  391835 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:46.919169  391835 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:46.919209  391835 system_pods.go:61] "coredns-66bc5c9577-pcqqp" [56bb81cf-dbf6-45cd-8398-91762e3ce4a0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:46.919223  391835 system_pods.go:61] "etcd-newest-cni-708733" [b25803cb-7959-4752-b0e3-7f80be73ac86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:46.919230  391835 system_pods.go:61] "kindnet-z7dcb" [77bfd17c-f58c-418b-8e31-c2893c4a3647] Running
	I1018 09:45:46.919236  391835 system_pods.go:61] "kube-apiserver-newest-cni-708733" [846be6bb-a108-477e-9128-e8d6d2e396bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:46.919244  391835 system_pods.go:61] "kube-controller-manager-newest-cni-708733" [82bcfbf8-19ab-4fd7-856f-f7eb0d2e887b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:46.919251  391835 system_pods.go:61] "kube-proxy-nq79m" [7618e803-4e75-4661-ab8d-99195c316305] Running
	I1018 09:45:46.919257  391835 system_pods.go:61] "kube-scheduler-newest-cni-708733" [5d3ff5b3-f4aa-4f9f-a1ce-6bc323fa29dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:46.919263  391835 system_pods.go:61] "storage-provisioner" [930742e4-08ac-435f-8ae3-a6bbf9a76bcd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1018 09:45:46.919269  391835 system_pods.go:74] duration metric: took 3.799893ms to wait for pod list to return data ...
	I1018 09:45:46.919279  391835 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:46.921533  391835 default_sa.go:45] found service account: "default"
	I1018 09:45:46.921552  391835 default_sa.go:55] duration metric: took 2.267911ms for default service account to be created ...
	I1018 09:45:46.921563  391835 kubeadm.go:586] duration metric: took 3.721097004s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1018 09:45:46.921598  391835 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:46.923792  391835 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:46.923834  391835 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:46.923859  391835 node_conditions.go:105] duration metric: took 2.25193ms to run NodePressure ...
	I1018 09:45:46.923873  391835 start.go:241] waiting for startup goroutines ...
	I1018 09:45:46.923886  391835 start.go:246] waiting for cluster config update ...
	I1018 09:45:46.923900  391835 start.go:255] writing updated cluster config ...
	I1018 09:45:46.924119  391835 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:46.983400  391835 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:45:46.986152  391835 out.go:179] * Done! kubectl is now configured to use "newest-cni-708733" cluster and "default" namespace by default
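	(Editor's note: the final "kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)" line compares the client's and the cluster's minor versions, warning only when they differ by more than one minor, in line with the Kubernetes version-skew policy. A sketch of that comparison, assuming well-formed "v1.34.1"-style strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor number from a "1.34.1" / "v1.34.1" version string.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func main() {
		client, cluster := "1.34.1", "1.34.1" // values from the log line above
		skew := minor(client) - minor(cluster)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("minor skew: %d\n", skew) // matches "(minor skew: 0)"
		if skew > 1 {
			fmt.Println("warning: kubectl and cluster differ by more than one minor version")
		}
	})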
	I1018 09:45:42.577400  391061 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1018 09:45:42.581993  391061 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1018 09:45:42.583021  391061 api_server.go:141] control plane version: v1.34.1
	I1018 09:45:42.583043  391061 api_server.go:131] duration metric: took 1.006747407s to wait for apiserver health ...
	I1018 09:45:42.583053  391061 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:45:42.586716  391061 system_pods.go:59] 8 kube-system pods found
	I1018 09:45:42.586760  391061 system_pods.go:61] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:42.586776  391061 system_pods.go:61] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:42.586799  391061 system_pods.go:61] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:45:42.586813  391061 system_pods.go:61] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:42.586863  391061 system_pods.go:61] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:42.586878  391061 system_pods.go:61] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:45:42.586889  391061 system_pods.go:61] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:42.586899  391061 system_pods.go:61] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:42.586912  391061 system_pods.go:74] duration metric: took 3.851478ms to wait for pod list to return data ...
	I1018 09:45:42.586926  391061 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:45:42.589741  391061 default_sa.go:45] found service account: "default"
	I1018 09:45:42.589766  391061 default_sa.go:55] duration metric: took 2.832506ms for default service account to be created ...
	I1018 09:45:42.589785  391061 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:45:42.593436  391061 system_pods.go:86] 8 kube-system pods found
	I1018 09:45:42.593470  391061 system_pods.go:89] "coredns-66bc5c9577-ksdf9" [ba2449a3-fc94-49e2-9e00-868003d349b1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:45:42.593482  391061 system_pods.go:89] "etcd-embed-certs-055175" [acbafcab-4332-412c-9a28-07d9c4f5d5d1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:45:42.593493  391061 system_pods.go:89] "kindnet-tntfx" [f7f70a88-1903-43e5-a76f-2206c4e3df79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1018 09:45:42.593501  391061 system_pods.go:89] "kube-apiserver-embed-certs-055175" [106af05d-a1f9-4283-a87f-7ac003f31fc7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:45:42.593516  391061 system_pods.go:89] "kube-controller-manager-embed-certs-055175" [c2698b20-d1d6-4737-a292-7eef1978c79b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:45:42.593528  391061 system_pods.go:89] "kube-proxy-9n98q" [5c9c0f79-f699-4305-8423-c0863f443b78] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1018 09:45:42.593539  391061 system_pods.go:89] "kube-scheduler-embed-certs-055175" [7b537fc9-c879-44cc-95e6-35fb0dcc566a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:45:42.593559  391061 system_pods.go:89] "storage-provisioner" [1d121276-430c-41af-a2b6-542d426c43dc] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:45:42.593571  391061 system_pods.go:126] duration metric: took 3.778642ms to wait for k8s-apps to be running ...
	I1018 09:45:42.593589  391061 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:45:42.593628  391061 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:45:42.607437  391061 system_svc.go:56] duration metric: took 13.83871ms WaitForService to wait for kubelet
	I1018 09:45:42.607463  391061 kubeadm.go:586] duration metric: took 3.274237526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:45:42.607481  391061 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:45:42.610633  391061 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:45:42.610659  391061 node_conditions.go:123] node cpu capacity is 8
	I1018 09:45:42.610676  391061 node_conditions.go:105] duration metric: took 3.189324ms to run NodePressure ...
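	(Editor's note: the NodePressure check above reads the capacity figures straight off the node object: 304681132Ki of ephemeral storage and 8 CPUs on this runner. An equivalent standalone query, sketched under the assumption that kubectl is on PATH and pointed at this profile's kubeconfig:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Prints one line per node with the same fields the verifier logs.
		out, err := exec.Command("kubectl", "get", "nodes", "-o",
			`jsonpath={range .items[*]}{.metadata.name} cpu={.status.capacity.cpu} ephemeral-storage={.status.capacity.ephemeral-storage}{"\n"}{end}`).CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out))
			return
		}
		fmt.Print(string(out))
	})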
	I1018 09:45:42.610690  391061 start.go:241] waiting for startup goroutines ...
	I1018 09:45:42.610700  391061 start.go:246] waiting for cluster config update ...
	I1018 09:45:42.610711  391061 start.go:255] writing updated cluster config ...
	I1018 09:45:42.610989  391061 ssh_runner.go:195] Run: rm -f paused
	I1018 09:45:42.614869  391061 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:45:42.618204  391061 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:45:44.625233  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:45:46.625594  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
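	(Editor's note: the pod_ready loop above polls each labeled kube-system pod until its Ready condition turns True or the pod disappears, within a 4m0s budget. A standalone approximation of one iteration of that wait, using kubectl's jsonpath condition filter; the pod name is taken from the log, and the "pod gone" branch is a simplification since an exec error can also mean the apiserver is unreachable:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		pod := "coredns-66bc5c9577-ksdf9" // the pod being waited on above
		for i := 0; i < 240; i++ {        // roughly the 4m0s budget at 1s per attempt
			out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pod", pod,
				"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).CombinedOutput()
			if err != nil {
				fmt.Println("pod gone or apiserver unreachable:", err)
				return
			}
			if strings.TrimSpace(string(out)) == "True" {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for Ready")
	})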
	I1018 09:45:43.888170  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:43.888770  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:43.888866  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:43.888962  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:43.924472  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:43.924500  353123 cri.go:89] found id: ""
	I1018 09:45:43.924511  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:45:43.924573  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:43.929570  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:43.929636  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:43.965802  353123 cri.go:89] found id: ""
	I1018 09:45:43.965845  353123 logs.go:282] 0 containers: []
	W1018 09:45:43.965856  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:43.965864  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:43.965919  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:43.994915  353123 cri.go:89] found id: ""
	I1018 09:45:43.994951  353123 logs.go:282] 0 containers: []
	W1018 09:45:43.994966  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:43.994973  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:43.995035  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:44.024685  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:44.024712  353123 cri.go:89] found id: ""
	I1018 09:45:44.024724  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:44.024787  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.028840  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:44.028896  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:44.065751  353123 cri.go:89] found id: ""
	I1018 09:45:44.065782  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.065793  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:44.065801  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:44.065914  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:44.106664  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:44.106692  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:44.106698  353123 cri.go:89] found id: ""
	I1018 09:45:44.106714  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:44.106775  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.114471  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:44.120400  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:44.120569  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:44.177487  353123 cri.go:89] found id: ""
	I1018 09:45:44.177515  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.177660  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:44.177671  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:44.177853  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:44.218056  353123 cri.go:89] found id: ""
	I1018 09:45:44.218088  353123 logs.go:282] 0 containers: []
	W1018 09:45:44.218118  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:44.218140  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:44.218157  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:44.254064  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:44.254100  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:44.293814  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:44.293874  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:44.431097  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:44.431148  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:44.478946  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:44.478979  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:44.547989  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:44.548022  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:44.586119  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:44.586155  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:44.659273  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:44.659309  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:44.683003  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:44.683044  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:44.773289  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:47.274933  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:45:47.275407  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:45:47.275469  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:45:47.275587  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:45:47.313368  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:47.313396  353123 cri.go:89] found id: ""
	I1018 09:45:47.313407  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:45:47.313469  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.318875  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:45:47.318951  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:45:47.358305  353123 cri.go:89] found id: ""
	I1018 09:45:47.358331  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.358340  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:45:47.358348  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:45:47.358411  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:45:47.394289  353123 cri.go:89] found id: ""
	I1018 09:45:47.394362  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.394375  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:45:47.394383  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:45:47.394436  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:45:47.433806  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:47.433852  353123 cri.go:89] found id: ""
	I1018 09:45:47.433862  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:45:47.433917  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.438841  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:45:47.438906  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:45:47.467930  353123 cri.go:89] found id: ""
	I1018 09:45:47.467958  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.467969  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:45:47.467976  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:45:47.468038  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:45:47.496921  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:45:47.496943  353123 cri.go:89] found id: "813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:47.496947  353123 cri.go:89] found id: ""
	I1018 09:45:47.496956  353123 logs.go:282] 2 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05]
	I1018 09:45:47.497020  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.500969  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:45:47.504943  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:45:47.504993  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:45:47.532458  353123 cri.go:89] found id: ""
	I1018 09:45:47.532481  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.532489  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:45:47.532494  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:45:47.532552  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:45:47.559397  353123 cri.go:89] found id: ""
	I1018 09:45:47.559424  353123 logs.go:282] 0 containers: []
	W1018 09:45:47.559434  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:45:47.559450  353123 logs.go:123] Gathering logs for kube-controller-manager [813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05] ...
	I1018 09:45:47.559465  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 813afb4235ed2b7af5e93ebee0f26624c2a87e639e7c4d6e11a8513c01ea5b05"
	I1018 09:45:47.588691  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:45:47.588721  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:45:47.623441  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:45:47.623468  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:45:47.661447  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:45:47.661474  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:45:47.725061  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:45:47.725096  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:45:47.791940  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:45:47.791979  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:45:47.887049  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:45:47.887085  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:45:47.905941  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:45:47.905973  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:45:47.975407  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:45:47.975430  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:45:47.975447  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	W1018 09:45:49.124482  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:45:51.124572  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.783734507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.787494572Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ab067c82-770c-476c-b7aa-76d9efeba3b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.788324487Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=87365546-3e8e-48a8-9274-d477451dc0bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.78937543Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.790260749Z" level=info msg="Ran pod sandbox 28ca068a3930bcb084d1710f344c27bc07ffa2f0458e3d32cac2472a30cac03a with infra container: kube-system/kindnet-z7dcb/POD" id=ab067c82-770c-476c-b7aa-76d9efeba3b0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.79162408Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=16ee9371-19a4-4d89-bd1e-4c16473444ec name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.79269715Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.794339384Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a48cd9d4-bf6c-486a-9f62-7de142417dcc name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.794557897Z" level=info msg="Ran pod sandbox bd1b4699edf639c51131476b9d49a57cd427cab5edb06b5c2091461a0411260f with infra container: kube-system/kube-proxy-nq79m/POD" id=87365546-3e8e-48a8-9274-d477451dc0bc name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.796148126Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=3b78dce7-3ff6-4247-85fa-69a44328a796 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.796353076Z" level=info msg="Creating container: kube-system/kindnet-z7dcb/kindnet-cni" id=04437c9d-f8bb-4d21-aabd-aa6e738f5d06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.797297417Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=113bd0e1-8006-4183-8a84-3ff0f7442f9d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.797626337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.798688654Z" level=info msg="Creating container: kube-system/kube-proxy-nq79m/kube-proxy" id=561d4e0f-c0d1-4227-920e-16cc907e869b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.80139987Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.802525791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.803197133Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.810702337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.811310016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.834476212Z" level=info msg="Created container 204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20: kube-system/kindnet-z7dcb/kindnet-cni" id=04437c9d-f8bb-4d21-aabd-aa6e738f5d06 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.836177134Z" level=info msg="Starting container: 204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20" id=431a92ba-383b-49d3-89be-0a2e4a4c16a6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.838793535Z" level=info msg="Started container" PID=1030 containerID=204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20 description=kube-system/kindnet-z7dcb/kindnet-cni id=431a92ba-383b-49d3-89be-0a2e4a4c16a6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28ca068a3930bcb084d1710f344c27bc07ffa2f0458e3d32cac2472a30cac03a
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.84212974Z" level=info msg="Created container ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68: kube-system/kube-proxy-nq79m/kube-proxy" id=561d4e0f-c0d1-4227-920e-16cc907e869b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.843478048Z" level=info msg="Starting container: ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68" id=8a32dd68-5a14-4156-b104-eecea5f3283a name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:45:45 newest-cni-708733 crio[517]: time="2025-10-18T09:45:45.848688716Z" level=info msg="Started container" PID=1031 containerID=ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68 description=kube-system/kube-proxy-nq79m/kube-proxy id=8a32dd68-5a14-4156-b104-eecea5f3283a name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd1b4699edf639c51131476b9d49a57cd427cab5edb06b5c2091461a0411260f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	ed56304dada2b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   bd1b4699edf63       kube-proxy-nq79m                            kube-system
	204965dc89584       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   28ca068a3930b       kindnet-z7dcb                               kube-system
	082f88526b1ee       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   2b606006c91ff       kube-scheduler-newest-cni-708733            kube-system
	ff767733a8265       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   f5aaa6430420a       kube-apiserver-newest-cni-708733            kube-system
	db7341eafc41c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   431f8bf25796e       etcd-newest-cni-708733                      kube-system
	4d31b12a89bd9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   311e1b37fb613       kube-controller-manager-newest-cni-708733   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-708733
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-708733
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=newest-cni-708733
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:11 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-708733
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:45:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 18 Oct 2025 09:45:45 +0000   Sat, 18 Oct 2025 09:45:09 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-708733
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                b382c5a4-fd22-47f3-b8a6-fb04181833ca
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-708733                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-z7dcb                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-708733             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-newest-cni-708733    200m (2%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-nq79m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-708733             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 33s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node newest-cni-708733 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node newest-cni-708733 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node newest-cni-708733 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s   node-controller  Node newest-cni-708733 event: Registered Node newest-cni-708733 in Controller
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-708733 event: Registered Node newest-cni-708733 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [db7341eafc41ce4a3c0819db1d63993ca69b231784785233e8f6cde3e77357be] <==
	{"level":"warn","ts":"2025-10-18T09:45:44.467619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.484669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.499772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.502771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.510071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.517777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.524623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.530599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.538629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.545604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.552274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.560054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.567753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.576209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.584396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.592864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.600208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.608813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.616698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.625335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.632531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.648161Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.656010Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.663936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:44.735964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:45:53 up  1:28,  0 user,  load average: 2.46, 2.76, 1.87
	Linux newest-cni-708733 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [204965dc89584beb2735f7dcd8dd4fad9d6cd7d7794b97b5aebd873dae276d20] <==
	I1018 09:45:46.094214       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:46.094514       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1018 09:45:46.094631       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:46.094650       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:46.094668       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:46.297805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:46.297906       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:46.297940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:46.298069       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:46.690653       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:46.690711       1 metrics.go:72] Registering metrics
	I1018 09:45:46.690795       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [ff767733a82659e64d11546f810903a05069bae143f380f9bd6fbe22ce1533d9] <==
	I1018 09:45:45.401176       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1018 09:45:45.405396       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:45:45.405411       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:45:45.405417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:45:45.405424       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:45:45.401164       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:45:45.401197       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:45:45.405879       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:45:45.409985       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:45:45.414937       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:45:45.434108       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:45.436028       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:45:45.441039       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:45.652354       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:45.854551       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:45:45.895674       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:45.925756       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:45.935555       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:46.006136       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.150.235"}
	I1018 09:45:46.026767       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.197.188"}
	I1018 09:45:46.308187       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:48.732461       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:45:49.033408       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:49.183744       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4d31b12a89bd95e764894ba0f4011abf85b4b249605f6abee6c147e7b546795d] <==
	I1018 09:45:48.726660       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1018 09:45:48.729021       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1018 09:45:48.729111       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1018 09:45:48.729123       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:48.729420       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1018 09:45:48.729521       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1018 09:45:48.729532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1018 09:45:48.730330       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:45:48.731510       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:45:48.731600       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:45:48.732784       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1018 09:45:48.733654       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:45:48.734372       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1018 09:45:48.735780       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:45:48.737222       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:45:48.737324       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1018 09:45:48.737771       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1018 09:45:48.739077       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:45:48.741307       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:48.741405       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1018 09:45:48.744600       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:45:48.772924       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:48.783929       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:48.783950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:45:48.783960       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [ed56304dada2b69f3a06fd413ad71a4d41081d77b43ef79dbd43751753447b68] <==
	I1018 09:45:45.898411       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:45.975123       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:46.077039       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:46.077081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1018 09:45:46.077188       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:46.106221       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:46.106379       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:46.113640       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:46.114237       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:46.114396       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:46.117272       1 config.go:200] "Starting service config controller"
	I1018 09:45:46.117379       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:46.117569       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:46.117612       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:46.117679       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:46.118517       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:46.117965       1 config.go:309] "Starting node config controller"
	I1018 09:45:46.118588       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:46.118601       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:45:46.218270       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:45:46.218293       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:46.219479       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [082f88526b1eea8a63828944e82308647c71102eb90a2a5ed82d5d2512799fce] <==
	I1018 09:45:43.800477       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:45:45.318595       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:45:45.318641       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:45:45.318653       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:45:45.318663       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:45:45.355676       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:45:45.355711       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:45.359577       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:45.359615       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:45.360027       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:45:45.360163       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:45:45.462310       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:45:44 newest-cni-708733 kubelet[660]: E1018 09:45:44.515171     660 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-708733\" not found" node="newest-cni-708733"
	Oct 18 09:45:44 newest-cni-708733 kubelet[660]: E1018 09:45:44.515274     660 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-708733\" not found" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.372894     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.397242     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-708733\" already exists" pod="kube-system/kube-scheduler-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.397403     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.418149     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-708733\" already exists" pod="kube-system/etcd-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.418370     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.431339     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-708733\" already exists" pod="kube-system/kube-apiserver-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.431421     660 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: E1018 09:45:45.460864     660 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-708733\" already exists" pod="kube-system/kube-controller-manager-newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461310     660 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461418     660 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-708733"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.461463     660 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.463602     660 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.467740     660 apiserver.go:52] "Watching apiserver"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.571670     660 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646251     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-lib-modules\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646314     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7618e803-4e75-4661-ab8d-99195c316305-xtables-lock\") pod \"kube-proxy-nq79m\" (UID: \"7618e803-4e75-4661-ab8d-99195c316305\") " pod="kube-system/kube-proxy-nq79m"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646407     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-lib-modules\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646443     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-xtables-lock\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:45 newest-cni-708733 kubelet[660]: I1018 09:45:45.646468     660 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/77bfd17c-f58c-418b-8e31-c2893c4a3647-cni-cfg\") pod \"kindnet-z7dcb\" (UID: \"77bfd17c-f58c-418b-8e31-c2893c4a3647\") " pod="kube-system/kindnet-z7dcb"
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:45:48 newest-cni-708733 kubelet[660]: I1018 09:45:48.017565     660 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:45:48 newest-cni-708733 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
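Reading the dump above: the Ready=False condition and the node.kubernetes.io/not-ready taint are expected this soon after a restart, since the kindnet-cni container was only 7 seconds old at capture time and had not yet written a CNI config to /etc/cni/net.d/; the closing kubelet lines show systemd stopping kubelet.service, which matches minikube's pause path running `sudo systemctl disable --now kubelet` (visible in the embed-certs trace later in this report). A sketch for confirming by hand that the NotReady state is transient — the context and container names come from this report, the commands themselves are standard kubectl/docker usage:

	kubectl --context newest-cni-708733 get nodes -w     # watch Ready flip to True once kindnet writes its CNI conf
	docker exec newest-cni-708733 ls /etc/cni/net.d/     # a kindnet config file should appear shortly after the restart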
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-708733 -n newest-cni-708733
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-708733 -n newest-cni-708733: exit status 2 (306.050792ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-708733 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r: exit status 1 (65.226559ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-pcqqp" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-c8n2g" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5bx7r" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-708733 describe pod coredns-66bc5c9577-pcqqp storage-provisioner dashboard-metrics-scraper-6ffb444bf9-c8n2g kubernetes-dashboard-855c9754f9-5bx7r: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (6.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-055175 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-055175 --alsologtostderr -v=1: exit status 80 (1.785365053s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-055175 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1018 09:46:34.825515  406849 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:46:34.826038  406849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:34.826053  406849 out.go:374] Setting ErrFile to fd 2...
	I1018 09:46:34.826059  406849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:34.826354  406849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:46:34.827045  406849 out.go:368] Setting JSON to false
	I1018 09:46:34.827106  406849 mustload.go:65] Loading cluster: embed-certs-055175
	I1018 09:46:34.827598  406849 config.go:182] Loaded profile config "embed-certs-055175": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:34.828258  406849 cli_runner.go:164] Run: docker container inspect embed-certs-055175 --format={{.State.Status}}
	I1018 09:46:34.857098  406849 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:46:34.857480  406849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:46:34.922066  406849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 09:46:34.911261219 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:46:34.922965  406849 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-055175 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:46:34.924340  406849 out.go:179] * Pausing node embed-certs-055175 ... 
	I1018 09:46:34.925288  406849 host.go:66] Checking if "embed-certs-055175" exists ...
	I1018 09:46:34.925572  406849 ssh_runner.go:195] Run: systemctl --version
	I1018 09:46:34.925610  406849 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-055175
	I1018 09:46:34.945996  406849 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/embed-certs-055175/id_rsa Username:docker}
	I1018 09:46:35.047910  406849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:35.070495  406849 pause.go:52] kubelet running: true
	I1018 09:46:35.070603  406849 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:46:35.279077  406849 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:46:35.279192  406849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:46:35.356529  406849 cri.go:89] found id: "eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af"
	I1018 09:46:35.356582  406849 cri.go:89] found id: "7532c8b9596c037e46d007cefb401457054eec5cbee4a52ea325b5f3828bb3f9"
	I1018 09:46:35.356589  406849 cri.go:89] found id: "fca12026bf0b1ba5900afb94e683550d1e47af8a207f77fcb266172b3322547a"
	I1018 09:46:35.356593  406849 cri.go:89] found id: "cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20"
	I1018 09:46:35.356598  406849 cri.go:89] found id: "18b9b557a1a00e9e1345cbaf906acf8f76759deaa3ffbb5c5956d703f09a134d"
	I1018 09:46:35.356602  406849 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:46:35.356606  406849 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:46:35.356610  406849 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:46:35.356614  406849 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:46:35.356642  406849 cri.go:89] found id: "d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c"
	I1018 09:46:35.356651  406849 cri.go:89] found id: "492f754e3064cb75bcfcd048c637bf1d922ca2b1f7c946df701660dacb55b5b6"
	I1018 09:46:35.356656  406849 cri.go:89] found id: ""
	I1018 09:46:35.356701  406849 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:46:35.368437  406849 retry.go:31] will retry after 153.567354ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:46:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:46:35.522755  406849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:35.535950  406849 pause.go:52] kubelet running: false
	I1018 09:46:35.536011  406849 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:46:35.681621  406849 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:46:35.681725  406849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:46:35.766919  406849 cri.go:89] found id: "eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af"
	I1018 09:46:35.766964  406849 cri.go:89] found id: "7532c8b9596c037e46d007cefb401457054eec5cbee4a52ea325b5f3828bb3f9"
	I1018 09:46:35.766971  406849 cri.go:89] found id: "fca12026bf0b1ba5900afb94e683550d1e47af8a207f77fcb266172b3322547a"
	I1018 09:46:35.766976  406849 cri.go:89] found id: "cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20"
	I1018 09:46:35.766980  406849 cri.go:89] found id: "18b9b557a1a00e9e1345cbaf906acf8f76759deaa3ffbb5c5956d703f09a134d"
	I1018 09:46:35.766985  406849 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:46:35.766990  406849 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:46:35.766993  406849 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:46:35.766998  406849 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:46:35.767011  406849 cri.go:89] found id: "d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c"
	I1018 09:46:35.767018  406849 cri.go:89] found id: "492f754e3064cb75bcfcd048c637bf1d922ca2b1f7c946df701660dacb55b5b6"
	I1018 09:46:35.767021  406849 cri.go:89] found id: ""
	I1018 09:46:35.767071  406849 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:46:35.781392  406849 retry.go:31] will retry after 510.662985ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:46:35Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:46:36.293173  406849 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:36.306138  406849 pause.go:52] kubelet running: false
	I1018 09:46:36.306195  406849 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:46:36.460712  406849 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:46:36.460803  406849 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:46:36.530789  406849 cri.go:89] found id: "eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af"
	I1018 09:46:36.530810  406849 cri.go:89] found id: "7532c8b9596c037e46d007cefb401457054eec5cbee4a52ea325b5f3828bb3f9"
	I1018 09:46:36.530814  406849 cri.go:89] found id: "fca12026bf0b1ba5900afb94e683550d1e47af8a207f77fcb266172b3322547a"
	I1018 09:46:36.530817  406849 cri.go:89] found id: "cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20"
	I1018 09:46:36.530841  406849 cri.go:89] found id: "18b9b557a1a00e9e1345cbaf906acf8f76759deaa3ffbb5c5956d703f09a134d"
	I1018 09:46:36.530855  406849 cri.go:89] found id: "82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75"
	I1018 09:46:36.530860  406849 cri.go:89] found id: "d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f"
	I1018 09:46:36.530865  406849 cri.go:89] found id: "f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d"
	I1018 09:46:36.530869  406849 cri.go:89] found id: "0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d"
	I1018 09:46:36.530884  406849 cri.go:89] found id: "d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c"
	I1018 09:46:36.530892  406849 cri.go:89] found id: "492f754e3064cb75bcfcd048c637bf1d922ca2b1f7c946df701660dacb55b5b6"
	I1018 09:46:36.530896  406849 cri.go:89] found id: ""
	I1018 09:46:36.530938  406849 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:46:36.544904  406849 out.go:203] 
	W1018 09:46:36.546008  406849 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:46:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:46:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:46:36.546028  406849 out.go:285] * 
	* 
	W1018 09:46:36.550071  406849 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:46:36.551222  406849 out.go:203] 

                                                
                                                
** /stderr **
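What the trace shows: pause stops kubelet first, then tries to enumerate running containers with `sudo runc list -f json`; every attempt (including both timed retries) fails with `open /run/runc: no such file or directory` before minikube gives up with GUEST_PAUSE, even though the crictl queries in the same trace keep returning the same container IDs. /run/runc is runc's default state directory when run as root, so the error means that directory is absent inside the node. One plausible explanation, given the `"Tmpfs": {"/run": ""}` entry in the docker inspect output below, is that CRI-O keeps its runtime state under a different root than the one `runc list` reads by default, but that is an inference, not something the log states. A sketch for reproducing the check by hand; the profile name comes from this report, the commands are standard runc/crictl usage:

	docker exec embed-certs-055175 sudo runc list -f json      # reproduces the failure: open /run/runc: no such file or directory
	docker exec embed-certs-055175 ls -ld /run/runc            # confirm the default runc root is missing
	docker exec embed-certs-055175 sudo crictl ps -a --quiet   # CRI-O itself still lists the containers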
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-055175 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-055175
helpers_test.go:243: (dbg) docker inspect embed-certs-055175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	        "Created": "2025-10-18T09:44:28.71602918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 391257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:45:32.513143488Z",
	            "FinishedAt": "2025-10-18T09:45:31.679063648Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a-json.log",
	        "Name": "/embed-certs-055175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-055175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-055175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	                "LowerDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-055175",
	                "Source": "/var/lib/docker/volumes/embed-certs-055175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-055175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-055175",
	                "name.minikube.sigs.k8s.io": "embed-certs-055175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "225858e59a759609a218e9917712deaa3f1f149ba559732d2116cd45995f2ca0",
	            "SandboxKey": "/var/run/docker/netns/225858e59a75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-055175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:e1:38:d9:39:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d2dbeb8dc9f32aa321be9871888fc0b62950b6ca92410878ff116152ea346c2",
	                    "EndpointID": "667a1602f489d243c572ea5e9a80c150cc6dd31df68372768c41606b769130c5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-055175",
	                        "7ab18617f15c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
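The JSON above is the verbatim "docker container inspect" dump for the profile container, captured post-mortem by the test harness. As a minimal sketch (assuming the container name embed-certs-055175 from this report), the host-mapped SSH port recorded under NetworkSettings.Ports can be read back with the same Go template the harness itself uses later in this log:

	# hypothetical one-liner; 33217 is the value captured in the dump above
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-055175
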
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175: exit status 2 (345.394475ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
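Exit status 2 from the status probe is treated as non-fatal ("may be ok"): the host container still reports Running, most likely because the Pause test has paused, not stopped, the cluster. A minimal shell sketch (re-using the exact command above) that records the exit code instead of aborting on it:

	# hypothetical wrapper around the harness's status probe; the exit code is inspected, not fatal
	out=$(out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175)
	code=$?
	echo "host=${out} exit=${code}"   # for the state above: host=Running exit=2
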
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25
I1018 09:46:37.022637  134611 config.go:182] Loaded profile config "auto-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25: (1.551463857s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-expiration-650496                                                                                                                                                                                                                     │ cert-expiration-650496       │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p no-preload-589869                                                                                                                                                                                                                          │ no-preload-589869            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p newest-cni-708733 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-942905 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ image   │ newest-cni-708733 image list --format=json                                                                                                                                                                                                    │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ pause   │ -p newest-cni-708733 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ delete  │ -p newest-cni-708733                                                                                                                                                                                                                          │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ delete  │ -p newest-cni-708733                                                                                                                                                                                                                          │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p auto-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-942905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ image   │ embed-certs-055175 image list --format=json                                                                                                                                                                                                   │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ pause   │ -p embed-certs-055175 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 pgrep -a kubelet                                                                                                                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:46:02
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:46:02.534378  400675 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:46:02.534508  400675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:02.534520  400675 out.go:374] Setting ErrFile to fd 2...
	I1018 09:46:02.534526  400675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:02.535300  400675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:46:02.535898  400675 out.go:368] Setting JSON to false
	I1018 09:46:02.537000  400675 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5307,"bootTime":1760775456,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:46:02.537059  400675 start.go:141] virtualization: kvm guest
	I1018 09:46:02.542959  400675 out.go:179] * [default-k8s-diff-port-942905] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:46:02.544301  400675 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:46:02.544364  400675 notify.go:220] Checking for updates...
	I1018 09:46:02.546886  400675 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:46:02.548145  400675 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:46:02.552330  400675 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:46:02.553631  400675 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:46:02.554739  400675 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:46:02.556460  400675 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:02.557171  400675 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:46:02.588909  400675 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:46:02.589004  400675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:46:02.658442  400675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:46:02.64445347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:46:02.658602  400675 docker.go:318] overlay module found
	I1018 09:46:02.660353  400675 out.go:179] * Using the docker driver based on existing profile
	I1018 09:46:02.661490  400675 start.go:305] selected driver: docker
	I1018 09:46:02.661509  400675 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:02.661650  400675 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:46:02.662443  400675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:46:02.733195  400675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:46:02.720619393 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:46:02.733587  400675 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:46:02.733616  400675 cni.go:84] Creating CNI manager for ""
	I1018 09:46:02.733683  400675 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:02.733742  400675 start.go:349] cluster config:
	{Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:02.737163  400675 out.go:179] * Starting "default-k8s-diff-port-942905" primary control-plane node in "default-k8s-diff-port-942905" cluster
	I1018 09:46:02.738797  400675 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:46:02.740142  400675 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:46:02.744233  400675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:46:02.744943  400675 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:46:02.744389  400675 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:46:02.744963  400675 cache.go:58] Caching tarball of preloaded images
	I1018 09:46:02.745176  400675 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:46:02.745190  400675 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:46:02.745325  400675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:46:02.777226  400675 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:46:02.777252  400675 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:46:02.777269  400675 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:46:02.777337  400675 start.go:360] acquireMachinesLock for default-k8s-diff-port-942905: {Name:mk8b7fe5fa5304418be28440581999707ea8535f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:46:02.777403  400675 start.go:364] duration metric: took 38.159µs to acquireMachinesLock for "default-k8s-diff-port-942905"
	I1018 09:46:02.777421  400675 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:46:02.777427  400675 fix.go:54] fixHost starting: 
	I1018 09:46:02.777641  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:02.798466  400675 fix.go:112] recreateIfNeeded on default-k8s-diff-port-942905: state=Stopped err=<nil>
	W1018 09:46:02.798495  400675 fix.go:138] unexpected machine state, will restart: <nil>
	I1018 09:46:00.084891  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:00.085377  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:00.085432  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:00.085485  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:00.112872  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:00.112894  353123 cri.go:89] found id: ""
	I1018 09:46:00.112903  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:00.112970  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:00.116946  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:00.117014  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:00.144746  353123 cri.go:89] found id: ""
	I1018 09:46:00.144772  353123 logs.go:282] 0 containers: []
	W1018 09:46:00.144780  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:00.144785  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:00.144865  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:00.173321  353123 cri.go:89] found id: ""
	I1018 09:46:00.173348  353123 logs.go:282] 0 containers: []
	W1018 09:46:00.173360  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:00.173369  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:00.173424  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:00.201072  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:00.201098  353123 cri.go:89] found id: ""
	I1018 09:46:00.201109  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:00.201185  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:00.205056  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:00.205127  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:00.232667  353123 cri.go:89] found id: ""
	I1018 09:46:00.232698  353123 logs.go:282] 0 containers: []
	W1018 09:46:00.232708  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:00.232715  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:00.232781  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:00.260454  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:00.260480  353123 cri.go:89] found id: ""
	I1018 09:46:00.260490  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:00.260561  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:00.264578  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:00.264640  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:00.292655  353123 cri.go:89] found id: ""
	I1018 09:46:00.292682  353123 logs.go:282] 0 containers: []
	W1018 09:46:00.292694  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:00.292702  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:00.292756  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:00.319981  353123 cri.go:89] found id: ""
	I1018 09:46:00.320012  353123 logs.go:282] 0 containers: []
	W1018 09:46:00.320022  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:00.320032  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:00.320048  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:00.373535  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:00.373574  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:00.401766  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:00.401796  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:00.458859  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:00.458903  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:00.490029  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:00.490060  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:00.581993  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:00.582040  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:00.601327  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:00.601366  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:00.662112  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:00.662140  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:00.662159  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:03.196932  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:03.197353  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:03.197414  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:03.197477  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:03.230105  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:03.230127  353123 cri.go:89] found id: ""
	I1018 09:46:03.230137  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:03.230198  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:03.234041  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:03.234113  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:03.261514  353123 cri.go:89] found id: ""
	I1018 09:46:03.261592  353123 logs.go:282] 0 containers: []
	W1018 09:46:03.261604  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:03.261612  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:03.261670  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:03.297480  353123 cri.go:89] found id: ""
	I1018 09:46:03.297513  353123 logs.go:282] 0 containers: []
	W1018 09:46:03.297526  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:03.297534  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:03.297603  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:03.332226  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:03.332256  353123 cri.go:89] found id: ""
	I1018 09:46:03.332266  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:03.332330  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:03.337622  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:03.337717  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:03.375178  353123 cri.go:89] found id: ""
	I1018 09:46:03.375212  353123 logs.go:282] 0 containers: []
	W1018 09:46:03.375223  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:03.375230  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:03.375294  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:03.406955  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:03.406979  353123 cri.go:89] found id: ""
	I1018 09:46:03.406989  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:03.407052  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:03.411739  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:03.411807  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:03.439935  353123 cri.go:89] found id: ""
	I1018 09:46:03.439974  353123 logs.go:282] 0 containers: []
	W1018 09:46:03.439985  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:03.439993  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:03.440044  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:03.468529  353123 cri.go:89] found id: ""
	I1018 09:46:03.468556  353123 logs.go:282] 0 containers: []
	W1018 09:46:03.468564  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:03.468574  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:03.468589  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:03.498229  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:03.498257  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:03.591888  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:03.591924  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:03.610716  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:03.610747  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1018 09:46:02.030477  399440 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-345705:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.362697852s)
	I1018 09:46:02.030520  399440 kic.go:203] duration metric: took 4.362867401s to extract preloaded images to volume ...
	W1018 09:46:02.030645  399440 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1018 09:46:02.030690  399440 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1018 09:46:02.030738  399440 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1018 09:46:02.091171  399440 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-345705 --name auto-345705 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-345705 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-345705 --network auto-345705 --ip 192.168.103.2 --volume auto-345705:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1018 09:46:02.422143  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Running}}
	I1018 09:46:02.441695  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:02.462438  399440 cli_runner.go:164] Run: docker exec auto-345705 stat /var/lib/dpkg/alternatives/iptables
	I1018 09:46:02.511614  399440 oci.go:144] the created container "auto-345705" has a running status.
	I1018 09:46:02.511641  399440 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa...
	I1018 09:46:03.725677  399440 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1018 09:46:03.752746  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:03.770943  399440 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1018 09:46:03.770975  399440 kic_runner.go:114] Args: [docker exec --privileged auto-345705 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1018 09:46:03.817184  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:03.835997  399440 machine.go:93] provisionDockerMachine start ...
	I1018 09:46:03.836105  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:03.853699  399440 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:03.853998  399440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33228 <nil> <nil>}
	I1018 09:46:03.854016  399440 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:46:03.984863  399440 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-345705
	
	I1018 09:46:03.984894  399440 ubuntu.go:182] provisioning hostname "auto-345705"
	I1018 09:46:03.984963  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:04.002461  399440 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:04.002707  399440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33228 <nil> <nil>}
	I1018 09:46:04.002725  399440 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-345705 && echo "auto-345705" | sudo tee /etc/hostname
	I1018 09:46:04.145890  399440 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-345705
	
	I1018 09:46:04.145979  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:04.163701  399440 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:04.163980  399440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33228 <nil> <nil>}
	I1018 09:46:04.164007  399440 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-345705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-345705/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-345705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:46:04.298659  399440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:46:04.298693  399440 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:46:04.298743  399440 ubuntu.go:190] setting up certificates
	I1018 09:46:04.298758  399440 provision.go:84] configureAuth start
	I1018 09:46:04.298818  399440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-345705
	I1018 09:46:04.316466  399440 provision.go:143] copyHostCerts
	I1018 09:46:04.316564  399440 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:46:04.316581  399440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:46:04.316657  399440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:46:04.316772  399440 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:46:04.316784  399440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:46:04.316847  399440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:46:04.316936  399440 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:46:04.316946  399440 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:46:04.316983  399440 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:46:04.317071  399440 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.auto-345705 san=[127.0.0.1 192.168.103.2 auto-345705 localhost minikube]
	I1018 09:46:04.518961  399440 provision.go:177] copyRemoteCerts
	I1018 09:46:04.519040  399440 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:46:04.519094  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:04.536517  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:04.633174  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:46:04.653687  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1018 09:46:04.672552  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:46:04.691207  399440 provision.go:87] duration metric: took 392.430839ms to configureAuth
	I1018 09:46:04.691235  399440 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:46:04.691426  399440 config.go:182] Loaded profile config "auto-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:04.691553  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:04.709226  399440 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:04.709453  399440 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33228 <nil> <nil>}
	I1018 09:46:04.709475  399440 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:46:04.953462  399440 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:46:04.953492  399440 machine.go:96] duration metric: took 1.117464834s to provisionDockerMachine
	I1018 09:46:04.953507  399440 client.go:171] duration metric: took 7.826127825s to LocalClient.Create
	I1018 09:46:04.953532  399440 start.go:167] duration metric: took 7.826223024s to libmachine.API.Create "auto-345705"
	I1018 09:46:04.953547  399440 start.go:293] postStartSetup for "auto-345705" (driver="docker")
	I1018 09:46:04.953566  399440 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:46:04.953642  399440 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:46:04.953704  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:04.971353  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:05.069090  399440 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:46:05.072771  399440 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:46:05.072806  399440 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:46:05.072819  399440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:46:05.072890  399440 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:46:05.072998  399440 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:46:05.073119  399440 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:46:05.080595  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:46:05.100113  399440 start.go:296] duration metric: took 146.54464ms for postStartSetup
	I1018 09:46:05.100519  399440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-345705
	I1018 09:46:05.117728  399440 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/config.json ...
	I1018 09:46:05.118019  399440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:46:05.118066  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:05.136063  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:05.230772  399440 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:46:05.235250  399440 start.go:128] duration metric: took 8.109949033s to createHost
	I1018 09:46:05.235270  399440 start.go:83] releasing machines lock for "auto-345705", held for 8.11016895s
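
The two df probes above are minikube's disk-pressure check on /var: the first reads the used percentage, the second the free space in whole gigabytes. Standalone, with illustrative output:

	df -h /var | awk 'NR==2{print $5}'    # e.g. 23%  (column 5 = Use%)
	df -BG /var | awk 'NR==2{print $4}'   # e.g. 88G  (column 4 = Avail, in 1G blocks)
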
	I1018 09:46:05.235324  399440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-345705
	I1018 09:46:05.252699  399440 ssh_runner.go:195] Run: cat /version.json
	I1018 09:46:05.252718  399440 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:46:05.252757  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:05.252773  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:05.271590  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:05.271914  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:05.419716  399440 ssh_runner.go:195] Run: systemctl --version
	I1018 09:46:05.426574  399440 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:46:05.460334  399440 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:46:05.464906  399440 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:46:05.464970  399440 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:46:05.489030  399440 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
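
The find above disables any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix, so they cannot shadow the CNI minikube is about to install (kindnet, per the recommendation later in the log). A shell-safe rendering of the same command (the log's version relies on ssh_runner passing the unquoted globs and parentheses through):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
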
	I1018 09:46:05.489057  399440 start.go:495] detecting cgroup driver to use...
	I1018 09:46:05.489087  399440 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:46:05.489144  399440 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:46:05.504864  399440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:46:05.517071  399440 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:46:05.517120  399440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:46:05.532607  399440 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:46:05.549577  399440 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:46:05.631929  399440 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:46:05.717439  399440 docker.go:234] disabling docker service ...
	I1018 09:46:05.717505  399440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:46:05.736030  399440 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:46:05.749469  399440 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:46:05.832344  399440 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:46:05.918108  399440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:46:05.930781  399440 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:46:05.944889  399440 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:46:05.944940  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:05.954660  399440 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:46:05.954769  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:05.963649  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:05.972128  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:05.980658  399440 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:46:05.988370  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:05.996748  399440 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:06.009691  399440 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:06.018110  399440 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:46:06.025189  399440 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:46:06.032091  399440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:06.114062  399440 ssh_runner.go:195] Run: sudo systemctl restart crio
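
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is switched to systemd to match the host, conmon_cgroup is set to "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls so pods can bind low ports without CAP_NET_BIND_SERVICE. After the restart, the end state can be checked with a plain grep (verification command assumed, not from the log):

	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
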
	I1018 09:46:06.217759  399440 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:46:06.217859  399440 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:46:06.222269  399440 start.go:563] Will wait 60s for crictl version
	I1018 09:46:06.222340  399440 ssh_runner.go:195] Run: which crictl
	I1018 09:46:06.226151  399440 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:46:06.251784  399440 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
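
crictl resolves its endpoint from the /etc/crictl.yaml written earlier; the same version probe can be reproduced by hand with the endpoint made explicit (standard crictl flag):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
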
	I1018 09:46:06.251875  399440 ssh_runner.go:195] Run: crio --version
	I1018 09:46:06.281417  399440 ssh_runner.go:195] Run: crio --version
	I1018 09:46:06.312942  399440 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1018 09:46:06.313938  399440 cli_runner.go:164] Run: docker network inspect auto-345705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:46:06.332574  399440 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1018 09:46:06.336791  399440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:46:06.347258  399440 kubeadm.go:883] updating cluster {Name:auto-345705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-345705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:46:06.347404  399440 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:46:06.347473  399440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:06.384211  399440 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:46:06.384239  399440 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:46:06.384284  399440 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:06.412194  399440 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:46:06.412215  399440 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:46:06.412223  399440 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1018 09:46:06.412314  399440 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-345705 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-345705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
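
The [Service] drop-in above clears the packaged ExecStart and replaces it with the versioned kubelet binary plus node-specific flags; it lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp lines below). On the node, the rendered unit can be inspected with standard systemd tooling:

	systemctl cat kubelet              # base unit plus the 10-kubeadm.conf drop-in
	systemctl status kubelet --no-pager
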
	I1018 09:46:06.412382  399440 ssh_runner.go:195] Run: crio config
	I1018 09:46:06.469531  399440 cni.go:84] Creating CNI manager for ""
	I1018 09:46:06.469562  399440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:06.469586  399440 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:46:06.469617  399440 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-345705 NodeName:auto-345705 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:46:06.469757  399440 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-345705"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1018 09:46:06.469852  399440 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:46:06.478797  399440 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:46:06.478878  399440 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:46:06.486722  399440 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1018 09:46:06.500303  399440 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:46:06.520348  399440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
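
At this point the rendered kubeadm config (the three documents printed above) sits at /var/tmp/minikube/kubeadm.yaml.new on the node. Recent kubeadm releases can sanity-check such a file before init is attempted (a validation step assumed here, not part of minikube's flow):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
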
	I1018 09:46:06.535616  399440 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:46:06.539574  399440 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
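
The /etc/hosts rewrite above is an idempotent replace: strip any stale control-plane.minikube.internal line, append the fresh mapping, and copy the temp file back with sudo (plain redirection would run unprivileged). The same pattern as a reusable helper (names hypothetical):

	update_host_entry() {   # usage: update_host_entry <ip> <name>
	  { grep -v "$(printf '\t')$2\$" /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
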
	I1018 09:46:06.550101  399440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:06.642492  399440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:46:06.666755  399440 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705 for IP: 192.168.103.2
	I1018 09:46:06.666778  399440 certs.go:195] generating shared ca certs ...
	I1018 09:46:06.666804  399440 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:06.667044  399440 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:46:06.667104  399440 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:46:06.667117  399440 certs.go:257] generating profile certs ...
	I1018 09:46:06.667184  399440 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.key
	I1018 09:46:06.667208  399440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.crt with IP's: []
	I1018 09:46:06.882344  399440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.crt ...
	I1018 09:46:06.882373  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.crt: {Name:mk669dabc65b9b40dd9cae6466388b09791d9d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:06.882554  399440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.key ...
	I1018 09:46:06.882568  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/client.key: {Name:mk03021b8edcb2dafd1ca7468161daf949e6d176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:06.882680  399440 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key.841a9bd4
	I1018 09:46:06.882698  399440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt.841a9bd4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
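
The SAN list above is what makes the API server reachable under all of its identities: 10.96.0.1 is the in-cluster kubernetes Service ClusterIP (the first address of the 10.96.0.0/12 service CIDR), 192.168.103.2 is the node IP, plus loopback. Once the cert is written, its SANs can be confirmed with openssl (verification command assumed):

	openssl x509 -in apiserver.crt -noout -ext subjectAltName
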
	W1018 09:46:04.624071  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:46:06.626457  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	I1018 09:46:02.799713  400675 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-942905" ...
	I1018 09:46:02.799786  400675 cli_runner.go:164] Run: docker start default-k8s-diff-port-942905
	I1018 09:46:03.070056  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:03.090459  400675 kic.go:430] container "default-k8s-diff-port-942905" state is running.
	I1018 09:46:03.090896  400675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:46:03.112853  400675 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/config.json ...
	I1018 09:46:03.113161  400675 machine.go:93] provisionDockerMachine start ...
	I1018 09:46:03.113237  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:03.134640  400675 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:03.134955  400675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1018 09:46:03.134972  400675 main.go:141] libmachine: About to run SSH command:
	hostname
	I1018 09:46:03.135638  400675 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35566->127.0.0.1:33234: read: connection reset by peer
	I1018 09:46:06.272158  400675 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:46:06.272186  400675 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-942905"
	I1018 09:46:06.272248  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:06.290983  400675 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:06.291232  400675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1018 09:46:06.291247  400675 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-942905 && echo "default-k8s-diff-port-942905" | sudo tee /etc/hostname
	I1018 09:46:06.440325  400675 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-942905
	
	I1018 09:46:06.440405  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:06.461121  400675 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:06.461438  400675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1018 09:46:06.461475  400675 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-942905' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-942905/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-942905' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1018 09:46:06.602478  400675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1018 09:46:06.602507  400675 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21764-131066/.minikube CaCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21764-131066/.minikube}
	I1018 09:46:06.602542  400675 ubuntu.go:190] setting up certificates
	I1018 09:46:06.602553  400675 provision.go:84] configureAuth start
	I1018 09:46:06.602647  400675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:46:06.626806  400675 provision.go:143] copyHostCerts
	I1018 09:46:06.626900  400675 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem, removing ...
	I1018 09:46:06.626923  400675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem
	I1018 09:46:06.627010  400675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/ca.pem (1078 bytes)
	I1018 09:46:06.627155  400675 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem, removing ...
	I1018 09:46:06.627171  400675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem
	I1018 09:46:06.627224  400675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/cert.pem (1123 bytes)
	I1018 09:46:06.627322  400675 exec_runner.go:144] found /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem, removing ...
	I1018 09:46:06.627332  400675 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem
	I1018 09:46:06.627371  400675 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21764-131066/.minikube/key.pem (1679 bytes)
	I1018 09:46:06.627449  400675 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-942905 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-942905 localhost minikube]
	I1018 09:46:06.699268  400675 provision.go:177] copyRemoteCerts
	I1018 09:46:06.699344  400675 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1018 09:46:06.699393  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:06.722496  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:06.826424  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1018 09:46:06.846168  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1018 09:46:06.865877  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1018 09:46:06.884145  400675 provision.go:87] duration metric: took 281.576768ms to configureAuth
	I1018 09:46:06.884175  400675 ubuntu.go:206] setting minikube options for container-runtime
	I1018 09:46:06.884363  400675 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:06.884462  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:06.904044  400675 main.go:141] libmachine: Using SSH client type: native
	I1018 09:46:06.904344  400675 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840040] 0x842d40 <nil>  [] 0s} 127.0.0.1 33234 <nil> <nil>}
	I1018 09:46:06.904373  400675 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1018 09:46:07.212271  400675 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1018 09:46:07.212300  400675 machine.go:96] duration metric: took 4.099120609s to provisionDockerMachine
	I1018 09:46:07.212314  400675 start.go:293] postStartSetup for "default-k8s-diff-port-942905" (driver="docker")
	I1018 09:46:07.212418  400675 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1018 09:46:07.212512  400675 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1018 09:46:07.212561  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:07.233460  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:07.330649  400675 ssh_runner.go:195] Run: cat /etc/os-release
	I1018 09:46:07.334421  400675 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1018 09:46:07.334446  400675 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1018 09:46:07.334456  400675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/addons for local assets ...
	I1018 09:46:07.334537  400675 filesync.go:126] Scanning /home/jenkins/minikube-integration/21764-131066/.minikube/files for local assets ...
	I1018 09:46:07.334659  400675 filesync.go:149] local asset: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem -> 1346112.pem in /etc/ssl/certs
	I1018 09:46:07.334777  400675 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1018 09:46:07.342867  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:46:07.360397  400675 start.go:296] duration metric: took 147.987654ms for postStartSetup
	I1018 09:46:07.360478  400675 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:46:07.360522  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:07.378964  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:07.474321  400675 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1018 09:46:07.478869  400675 fix.go:56] duration metric: took 4.701435333s for fixHost
	I1018 09:46:07.478898  400675 start.go:83] releasing machines lock for "default-k8s-diff-port-942905", held for 4.701482684s
	I1018 09:46:07.478960  400675 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-942905
	I1018 09:46:07.497109  400675 ssh_runner.go:195] Run: cat /version.json
	I1018 09:46:07.497167  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:07.497207  400675 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1018 09:46:07.497274  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:07.516206  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:07.516949  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:07.577188  399440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt.841a9bd4 ...
	I1018 09:46:07.577215  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt.841a9bd4: {Name:mkd2c733a2b1dcd03a19ae99f71c5545cd16c797 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:07.577380  399440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key.841a9bd4 ...
	I1018 09:46:07.577397  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key.841a9bd4: {Name:mkbc74e4fd553e1936af35c601d0147eb4c84c86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:07.577471  399440 certs.go:382] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt.841a9bd4 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt
	I1018 09:46:07.577549  399440 certs.go:386] copying /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key.841a9bd4 -> /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key
	I1018 09:46:07.577604  399440 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.key
	I1018 09:46:07.577633  399440 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.crt with IP's: []
	I1018 09:46:07.792884  399440 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.crt ...
	I1018 09:46:07.792910  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.crt: {Name:mka28a671563c9312d3e63ea5a189654784e7fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:07.793061  399440 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.key ...
	I1018 09:46:07.793072  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.key: {Name:mk4ce6a6f6de3fa8af6ef77e1fbfdab0ce745655 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:07.793293  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:46:07.793350  399440 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:46:07.793364  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:46:07.793385  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:46:07.793412  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:46:07.793454  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:46:07.793515  399440 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:46:07.794324  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:46:07.815575  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:46:07.836516  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:46:07.854889  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:46:07.872912  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1018 09:46:07.891941  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:46:07.915466  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:46:07.932834  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/auto-345705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:46:07.950245  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:46:07.969990  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:46:07.987572  399440 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:46:08.009337  399440 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:46:08.023262  399440 ssh_runner.go:195] Run: openssl version
	I1018 09:46:08.029074  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:46:08.037231  399440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:08.041176  399440 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:08.041228  399440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:08.075519  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1018 09:46:08.084500  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:46:08.096140  399440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:46:08.101522  399440 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:46:08.101654  399440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:46:08.138418  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:46:08.148579  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:46:08.158177  399440 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:46:08.162343  399440 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:46:08.162393  399440 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:46:08.201367  399440 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
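
The .0 symlinks created above follow OpenSSL's hashed-directory convention: each name is the subject hash of the CA it points at (b5213941 for minikubeCA, matching the x509 -hash calls in the log). The links let TLS clients that trust /etc/ssl/certs find the minikube CA and the test's extra certs by hash lookup:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # -> minikubeCA.pem
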
	I1018 09:46:08.210214  399440 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:46:08.214208  399440 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1018 09:46:08.214275  399440 kubeadm.go:400] StartCluster: {Name:auto-345705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-345705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:08.214382  399440 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:46:08.214444  399440 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:46:08.241922  399440 cri.go:89] found id: ""
	I1018 09:46:08.241988  399440 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:46:08.250121  399440 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:46:08.258269  399440 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:46:08.258327  399440 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:46:08.266926  399440 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:46:08.266947  399440 kubeadm.go:157] found existing configuration files:
	
	I1018 09:46:08.266996  399440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:46:08.274706  399440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:46:08.274751  399440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:46:08.282324  399440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:46:08.290945  399440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:46:08.291024  399440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:46:08.301415  399440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:46:08.310025  399440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:46:08.310108  399440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:46:08.317457  399440 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:46:08.325354  399440 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:46:08.325419  399440 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:46:08.333294  399440 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:46:08.374100  399440 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:46:08.374172  399440 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:46:08.395684  399440 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:46:08.395784  399440 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:46:08.395876  399440 kubeadm.go:318] OS: Linux
	I1018 09:46:08.395935  399440 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:46:08.396016  399440 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:46:08.396063  399440 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:46:08.396115  399440 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:46:08.396159  399440 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:46:08.396198  399440 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:46:08.396243  399440 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:46:08.396281  399440 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:46:08.463220  399440 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:46:08.463365  399440 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:46:08.463538  399440 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
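
The preflight hint above refers to pre-pulling the control-plane images so init does not block on the network; in this environment that would look like (flags are standard kubeadm, paths match the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
	  --kubernetes-version v1.34.1 --cri-socket unix:///var/run/crio/crio.sock

Here, though, the images were already baked into the preload ("all images are preloaded for cri-o runtime" earlier in the log), so the pull completes almost immediately.
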
	I1018 09:46:08.472705  399440 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:46:07.608269  400675 ssh_runner.go:195] Run: systemctl --version
	I1018 09:46:07.676523  400675 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1018 09:46:07.712698  400675 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1018 09:46:07.717502  400675 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1018 09:46:07.717611  400675 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1018 09:46:07.725636  400675 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1018 09:46:07.725659  400675 start.go:495] detecting cgroup driver to use...
	I1018 09:46:07.725692  400675 detect.go:190] detected "systemd" cgroup driver on host os
	I1018 09:46:07.725733  400675 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1018 09:46:07.739841  400675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1018 09:46:07.753432  400675 docker.go:218] disabling cri-docker service (if available) ...
	I1018 09:46:07.753494  400675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1018 09:46:07.769472  400675 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1018 09:46:07.782155  400675 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1018 09:46:07.867004  400675 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1018 09:46:07.950290  400675 docker.go:234] disabling docker service ...
	I1018 09:46:07.950351  400675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1018 09:46:07.964746  400675 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1018 09:46:07.977114  400675 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1018 09:46:08.061441  400675 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1018 09:46:08.147070  400675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1018 09:46:08.161328  400675 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1018 09:46:08.177048  400675 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1018 09:46:08.177109  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.186945  400675 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1018 09:46:08.187013  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.195928  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.204936  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.214307  400675 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1018 09:46:08.222396  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.231703  400675 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.241086  400675 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1018 09:46:08.250284  400675 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1018 09:46:08.258157  400675 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1018 09:46:08.265887  400675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:08.350411  400675 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1018 09:46:08.466698  400675 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1018 09:46:08.466767  400675 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1018 09:46:08.471647  400675 start.go:563] Will wait 60s for crictl version
	I1018 09:46:08.471709  400675 ssh_runner.go:195] Run: which crictl
	I1018 09:46:08.475588  400675 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1018 09:46:08.501130  400675 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1018 09:46:08.501231  400675 ssh_runner.go:195] Run: crio --version
	I1018 09:46:08.529975  400675 ssh_runner.go:195] Run: crio --version
	I1018 09:46:08.558806  400675 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1018 09:46:03.668799  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:03.668857  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:03.668875  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:03.702912  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:03.702948  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:03.759379  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:03.759415  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:03.787450  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:03.787478  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:06.346895  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:06.347310  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:06.347357  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:06.347410  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:06.377522  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:06.377545  353123 cri.go:89] found id: ""
	I1018 09:46:06.377553  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:06.377601  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:06.381717  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:06.381788  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:06.411551  353123 cri.go:89] found id: ""
	I1018 09:46:06.411580  353123 logs.go:282] 0 containers: []
	W1018 09:46:06.411588  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:06.411596  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:06.411652  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:06.443271  353123 cri.go:89] found id: ""
	I1018 09:46:06.443298  353123 logs.go:282] 0 containers: []
	W1018 09:46:06.443310  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:06.443319  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:06.443378  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:06.474187  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:06.474209  353123 cri.go:89] found id: ""
	I1018 09:46:06.474219  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:06.474276  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:06.478317  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:06.478379  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:06.506053  353123 cri.go:89] found id: ""
	I1018 09:46:06.506081  353123 logs.go:282] 0 containers: []
	W1018 09:46:06.506090  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:06.506097  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:06.506161  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:06.535731  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:06.535753  353123 cri.go:89] found id: ""
	I1018 09:46:06.535763  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:06.535838  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:06.539678  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:06.539732  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:06.568277  353123 cri.go:89] found id: ""
	I1018 09:46:06.568311  353123 logs.go:282] 0 containers: []
	W1018 09:46:06.568322  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:06.568330  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:06.568387  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:06.601832  353123 cri.go:89] found id: ""
	I1018 09:46:06.601861  353123 logs.go:282] 0 containers: []
	W1018 09:46:06.601871  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:06.601881  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:06.601893  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:06.624739  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:06.624778  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:06.694875  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:06.694899  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:06.694917  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:06.738534  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:06.738587  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:06.804042  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:06.804083  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:06.833841  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:06.833872  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:06.889383  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:06.889414  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:06.925300  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:06.925327  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:08.559916  400675 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-942905 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1018 09:46:08.576933  400675 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1018 09:46:08.581219  400675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
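The one-liner above is the usual trick for an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal entry, the fresh mapping is appended, and the temp file is copied back over /etc/hosts in a single sudo cp. Afterwards the file should contain a line like:

	192.168.94.1	host.minikube.internal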
	I1018 09:46:08.591515  400675 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1018 09:46:08.591657  400675 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:46:08.591713  400675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:08.627637  400675 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:46:08.627660  400675 crio.go:433] Images already preloaded, skipping extraction
	I1018 09:46:08.627714  400675 ssh_runner.go:195] Run: sudo crictl images --output json
	I1018 09:46:08.654024  400675 crio.go:514] all images are preloaded for cri-o runtime.
	I1018 09:46:08.654048  400675 cache_images.go:85] Images are preloaded, skipping loading
	I1018 09:46:08.654055  400675 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1018 09:46:08.654147  400675 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-942905 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1018 09:46:08.654212  400675 ssh_runner.go:195] Run: crio config
	I1018 09:46:08.706869  400675 cni.go:84] Creating CNI manager for ""
	I1018 09:46:08.706897  400675 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:08.706914  400675 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1018 09:46:08.706952  400675 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-942905 NodeName:default-k8s-diff-port-942905 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1018 09:46:08.707088  400675 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-942905"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
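Before a manifest like the one above is handed to kubeadm, it can be sanity-checked with kubeadm's built-in validator (available since kubeadm v1.26; the file path below assumes the kubeadm.yaml.new that is scp'd a few lines further on):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new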
	I1018 09:46:08.707154  400675 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1018 09:46:08.716072  400675 binaries.go:44] Found k8s binaries, skipping transfer
	I1018 09:46:08.716160  400675 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1018 09:46:08.724411  400675 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1018 09:46:08.738106  400675 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1018 09:46:08.750966  400675 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1018 09:46:08.763709  400675 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1018 09:46:08.767429  400675 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1018 09:46:08.777123  400675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:08.856898  400675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:46:08.883644  400675 certs.go:69] Setting up /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905 for IP: 192.168.94.2
	I1018 09:46:08.883668  400675 certs.go:195] generating shared ca certs ...
	I1018 09:46:08.883692  400675 certs.go:227] acquiring lock for ca certs: {Name:mkb303859252ea1aa4f6e2d387f7b915302cc278 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:08.883886  400675 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key
	I1018 09:46:08.883961  400675 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key
	I1018 09:46:08.883976  400675 certs.go:257] generating profile certs ...
	I1018 09:46:08.884083  400675 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/client.key
	I1018 09:46:08.884169  400675 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key.cb5a57ca
	I1018 09:46:08.884221  400675 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key
	I1018 09:46:08.884356  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem (1338 bytes)
	W1018 09:46:08.884406  400675 certs.go:480] ignoring /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611_empty.pem, impossibly tiny 0 bytes
	I1018 09:46:08.884420  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca-key.pem (1675 bytes)
	I1018 09:46:08.884451  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/ca.pem (1078 bytes)
	I1018 09:46:08.884479  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/cert.pem (1123 bytes)
	I1018 09:46:08.884521  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/certs/key.pem (1679 bytes)
	I1018 09:46:08.884599  400675 certs.go:484] found cert: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem (1708 bytes)
	I1018 09:46:08.885461  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1018 09:46:08.908626  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1018 09:46:08.931170  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1018 09:46:08.950953  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1018 09:46:08.977505  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1018 09:46:08.996101  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1018 09:46:09.012572  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1018 09:46:09.029308  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/default-k8s-diff-port-942905/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1018 09:46:09.045801  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/certs/134611.pem --> /usr/share/ca-certificates/134611.pem (1338 bytes)
	I1018 09:46:09.062364  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/ssl/certs/1346112.pem --> /usr/share/ca-certificates/1346112.pem (1708 bytes)
	I1018 09:46:09.079347  400675 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1018 09:46:09.097135  400675 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1018 09:46:09.109613  400675 ssh_runner.go:195] Run: openssl version
	I1018 09:46:09.115665  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134611.pem && ln -fs /usr/share/ca-certificates/134611.pem /etc/ssl/certs/134611.pem"
	I1018 09:46:09.124549  400675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134611.pem
	I1018 09:46:09.128313  400675 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 18 09:05 /usr/share/ca-certificates/134611.pem
	I1018 09:46:09.128360  400675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134611.pem
	I1018 09:46:09.162004  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134611.pem /etc/ssl/certs/51391683.0"
	I1018 09:46:09.170556  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1346112.pem && ln -fs /usr/share/ca-certificates/1346112.pem /etc/ssl/certs/1346112.pem"
	I1018 09:46:09.179215  400675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1346112.pem
	I1018 09:46:09.183243  400675 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 18 09:05 /usr/share/ca-certificates/1346112.pem
	I1018 09:46:09.183299  400675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1346112.pem
	I1018 09:46:09.219570  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1346112.pem /etc/ssl/certs/3ec20f2e.0"
	I1018 09:46:09.228036  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1018 09:46:09.236795  400675 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:09.240544  400675 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 18 08:58 /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:09.240597  400675 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1018 09:46:09.276308  400675 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
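The test/ln pairs above build OpenSSL's subject-hash directory layout: consumers of /etc/ssl/certs look a CA up by the hash of its subject, so each PEM needs a matching <hash>.0 symlink. Reproducing the last step by hand (the b5213941 value comes straight from the log line above):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0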
	I1018 09:46:09.284931  400675 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1018 09:46:09.288831  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1018 09:46:09.323900  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1018 09:46:09.358929  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1018 09:46:09.394353  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1018 09:46:09.443627  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1018 09:46:09.483749  400675 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
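Each -checkend 86400 probe above asks whether the certificate will still be valid 86400 seconds (24 hours) from now: openssl exits 0 if it will and 1 if it would expire within the window, so the caller only needs the exit status. The equivalent manual check:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring within 24h"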
	I1018 09:46:09.539278  400675 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-942905 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-942905 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:09.539388  400675 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1018 09:46:09.539453  400675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1018 09:46:09.581973  400675 cri.go:89] found id: "53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624"
	I1018 09:46:09.582022  400675 cri.go:89] found id: "c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7"
	I1018 09:46:09.582032  400675 cri.go:89] found id: "064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2"
	I1018 09:46:09.582037  400675 cri.go:89] found id: "776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152"
	I1018 09:46:09.582041  400675 cri.go:89] found id: ""
	I1018 09:46:09.582141  400675 ssh_runner.go:195] Run: sudo runc list -f json
	W1018 09:46:09.596530  400675 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:46:09Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:46:09.596603  400675 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1018 09:46:09.605567  400675 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1018 09:46:09.605594  400675 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1018 09:46:09.605647  400675 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1018 09:46:09.614634  400675 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:46:09.616002  400675 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-942905" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:46:09.616789  400675 kubeconfig.go:62] /home/jenkins/minikube-integration/21764-131066/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-942905" cluster setting kubeconfig missing "default-k8s-diff-port-942905" context setting]
	I1018 09:46:09.618095  400675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:09.620233  400675 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1018 09:46:09.634164  400675 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1018 09:46:09.634204  400675 kubeadm.go:601] duration metric: took 28.602765ms to restartPrimaryControlPlane
	I1018 09:46:09.634214  400675 kubeadm.go:402] duration metric: took 94.951127ms to StartCluster
	I1018 09:46:09.634236  400675 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:09.634298  400675 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:46:09.636433  400675 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:09.636691  400675 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:46:09.636868  400675 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:46:09.636969  400675 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-942905"
	I1018 09:46:09.636988  400675 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-942905"
	W1018 09:46:09.636997  400675 addons.go:247] addon storage-provisioner should already be in state true
	I1018 09:46:09.637040  400675 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:09.637047  400675 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-942905"
	I1018 09:46:09.637050  400675 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-942905"
	I1018 09:46:09.637064  400675 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-942905"
	W1018 09:46:09.637072  400675 addons.go:247] addon dashboard should already be in state true
	I1018 09:46:09.637071  400675 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-942905"
	I1018 09:46:09.637098  400675 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:46:09.637413  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:09.637575  400675 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:46:09.637577  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:09.638092  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:09.639317  400675 out.go:179] * Verifying Kubernetes components...
	I1018 09:46:09.645123  400675 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:09.667984  400675 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-942905"
	W1018 09:46:09.668058  400675 addons.go:247] addon default-storageclass should already be in state true
	I1018 09:46:09.668099  400675 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:46:09.668595  400675 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:46:09.670357  400675 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1018 09:46:09.671018  400675 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:46:09.671977  400675 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:46:09.672000  400675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:46:09.672024  400675 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1018 09:46:08.474729  399440 out.go:252]   - Generating certificates and keys ...
	I1018 09:46:08.474869  399440 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:46:08.474986  399440 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:46:08.978175  399440 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:46:09.189235  399440 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:46:09.303849  399440 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:46:09.655252  399440 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:46:09.801195  399440 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:46:09.801519  399440 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [auto-345705 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:46:10.153867  399440 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:46:10.154192  399440 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [auto-345705 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1018 09:46:10.440642  399440 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:46:10.698041  399440 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:46:11.069163  399440 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:46:11.069283  399440 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:46:11.467877  399440 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:46:11.622850  399440 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:46:09.672054  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:09.672904  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1018 09:46:09.672927  400675 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1018 09:46:09.672987  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:09.715979  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:09.718181  400675 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:46:09.718210  400675 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:46:09.718265  400675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:46:09.718108  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:09.748458  400675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:46:09.831640  400675 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:46:09.848852  400675 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-942905" to be "Ready" ...
	I1018 09:46:09.858123  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1018 09:46:09.858149  400675 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1018 09:46:09.866287  400675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:46:09.883894  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1018 09:46:09.883964  400675 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1018 09:46:09.884349  400675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:46:09.901763  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1018 09:46:09.901786  400675 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1018 09:46:09.921722  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1018 09:46:09.921744  400675 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1018 09:46:09.948139  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1018 09:46:09.948166  400675 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1018 09:46:09.966564  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1018 09:46:09.966586  400675 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1018 09:46:09.979804  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1018 09:46:09.979847  400675 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1018 09:46:09.996170  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1018 09:46:09.996198  400675 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1018 09:46:10.015530  400675 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:46:10.015557  400675 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1018 09:46:10.031752  400675 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1018 09:46:11.687029  400675 node_ready.go:49] node "default-k8s-diff-port-942905" is "Ready"
	I1018 09:46:11.687068  400675 node_ready.go:38] duration metric: took 1.838167307s for node "default-k8s-diff-port-942905" to be "Ready" ...
	I1018 09:46:11.687086  400675 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:46:11.687141  400675 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:12.266920  400675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.400590645s)
	I1018 09:46:12.266968  400675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.382561094s)
	I1018 09:46:12.267063  400675 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.235277176s)
	I1018 09:46:12.267277  400675 api_server.go:72] duration metric: took 2.630552866s to wait for apiserver process to appear ...
	I1018 09:46:12.267298  400675 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:46:12.267316  400675 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1018 09:46:12.270870  400675 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-942905 addons enable metrics-server
	
	I1018 09:46:12.272609  400675 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:46:12.272651  400675 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
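The 500 body above is the apiserver's itemized health report: every post-start hook is listed, and the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are the checks that typically lag while a restarted control plane finishes bootstrapping. The same report can be fetched directly for diagnosis (insecure TLS, hence -k; depending on the cluster's anonymous-auth setting this may require credentials):

	curl -k "https://192.168.94.2:8444/healthz?verbose"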
	I1018 09:46:12.274488  400675 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	W1018 09:46:09.124317  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:46:11.125999  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	I1018 09:46:12.098028  399440 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:46:12.223996  399440 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:46:12.414357  399440 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:46:12.414901  399440 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:46:12.419647  399440 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:46:12.275743  400675 addons.go:514] duration metric: took 2.638892263s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1018 09:46:09.516711  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:09.517249  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:09.517325  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:09.517404  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:09.557300  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:09.557320  353123 cri.go:89] found id: ""
	I1018 09:46:09.557329  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:09.557389  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:09.563597  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:09.563765  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:09.598049  353123 cri.go:89] found id: ""
	I1018 09:46:09.598098  353123 logs.go:282] 0 containers: []
	W1018 09:46:09.598108  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:09.598116  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:09.598247  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:09.634927  353123 cri.go:89] found id: ""
	I1018 09:46:09.634949  353123 logs.go:282] 0 containers: []
	W1018 09:46:09.634958  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:09.634966  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:09.635020  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:09.691623  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:09.691648  353123 cri.go:89] found id: ""
	I1018 09:46:09.691659  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:09.691713  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:09.701385  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:09.701517  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:09.750036  353123 cri.go:89] found id: ""
	I1018 09:46:09.750064  353123 logs.go:282] 0 containers: []
	W1018 09:46:09.750075  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:09.750083  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:09.750137  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:09.788543  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:09.788576  353123 cri.go:89] found id: ""
	I1018 09:46:09.788587  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:09.788640  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:09.793956  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:09.794027  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:09.832135  353123 cri.go:89] found id: ""
	I1018 09:46:09.832161  353123 logs.go:282] 0 containers: []
	W1018 09:46:09.832178  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:09.832186  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:09.832241  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:09.872078  353123 cri.go:89] found id: ""
	I1018 09:46:09.872104  353123 logs.go:282] 0 containers: []
	W1018 09:46:09.872114  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:09.872125  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:09.872138  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:10.035349  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:10.035382  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:10.062089  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:10.062128  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:10.143371  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:10.143393  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:10.143410  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:10.187680  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:10.187718  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:10.271476  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:10.271531  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:10.307627  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:10.307660  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:10.384196  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:10.384235  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:12.924328  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:12.924786  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:12.924879  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:12.924935  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:12.953428  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:12.953455  353123 cri.go:89] found id: ""
	I1018 09:46:12.953466  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:12.953526  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:12.957428  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:12.957493  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:12.985148  353123 cri.go:89] found id: ""
	I1018 09:46:12.985174  353123 logs.go:282] 0 containers: []
	W1018 09:46:12.985185  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:12.985193  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:12.985250  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:13.012316  353123 cri.go:89] found id: ""
	I1018 09:46:13.012339  353123 logs.go:282] 0 containers: []
	W1018 09:46:13.012346  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:13.012358  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:13.012412  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:13.048188  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:13.048214  353123 cri.go:89] found id: ""
	I1018 09:46:13.048225  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:13.048285  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:13.054040  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:13.054121  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:13.091796  353123 cri.go:89] found id: ""
	I1018 09:46:13.091853  353123 logs.go:282] 0 containers: []
	W1018 09:46:13.091866  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:13.091875  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:13.092080  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:13.128547  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:13.128674  353123 cri.go:89] found id: ""
	I1018 09:46:13.128688  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:13.128764  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:13.133137  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:13.133204  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:13.160196  353123 cri.go:89] found id: ""
	I1018 09:46:13.160225  353123 logs.go:282] 0 containers: []
	W1018 09:46:13.160237  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:13.160246  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:13.160308  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:13.189122  353123 cri.go:89] found id: ""
	I1018 09:46:13.189148  353123 logs.go:282] 0 containers: []
	W1018 09:46:13.189156  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:13.189165  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:13.189182  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:13.242036  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:13.242071  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:13.271898  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:13.271929  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:13.368864  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:13.368899  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:13.388348  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:13.388374  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:13.443509  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:13.443529  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:13.443543  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:13.477102  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:13.477132  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:13.533657  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:13.533692  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
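[Note on the loop above: the 353123 run is stuck in a probe-and-gather cycle — it probes https://192.168.85.2:8443/healthz (api_server.go:253), gets "connection refused" (api_server.go:269), and falls back to collecting kubelet/dmesg/CRI-O/container logs before retrying. A minimal sketch of that probe loop, assuming a self-signed apiserver certificate (hence the skipped TLS verification) and illustrative retry values; pollHealthz is our name, not minikube's:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it answers 200
// or the deadline expires, mirroring the "Checking apiserver healthz" /
// "stopped: ... connection refused" pattern in the log above.
func pollHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cert for its cluster names, not one signed
		// by a public CA, so an out-of-cluster probe skips verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err) // e.g. connect: connection refused
			time.Sleep(3 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("%s returned 200: %s\n", url, body)
			return nil
		}
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	_ = pollHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute)
}
]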
	I1018 09:46:12.420975  399440 out.go:252]   - Booting up control plane ...
	I1018 09:46:12.421101  399440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:46:12.421225  399440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:46:12.421810  399440 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:46:12.435321  399440 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:46:12.435436  399440 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:46:12.443072  399440 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:46:12.443418  399440 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:46:12.443475  399440 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:46:12.544233  399440 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:46:12.544362  399440 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:46:13.545244  399440 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001027132s
	I1018 09:46:13.549774  399440 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:46:13.549926  399440 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1018 09:46:13.550057  399440 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:46:13.550207  399440 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:46:14.472157  399440 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 922.237454ms
	I1018 09:46:15.358476  399440 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.808589653s
	I1018 09:46:17.051339  399440 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501466751s
	I1018 09:46:17.065013  399440 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:46:17.076775  399440 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:46:17.087866  399440 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:46:17.088192  399440 kubeadm.go:318] [mark-control-plane] Marking the node auto-345705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:46:17.098205  399440 kubeadm.go:318] [bootstrap-token] Using token: elka73.23q5uinl2stq819x
	W1018 09:46:13.625715  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:46:16.125032  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	I1018 09:46:12.768376  400675 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1018 09:46:12.773813  400675 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1018 09:46:12.773862  400675 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1018 09:46:13.267481  400675 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1018 09:46:13.272009  400675 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1018 09:46:13.273099  400675 api_server.go:141] control plane version: v1.34.1
	I1018 09:46:13.273123  400675 api_server.go:131] duration metric: took 1.005817912s to wait for apiserver health ...
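[Note on the 500 body above: every post-start hook reports ok except [-]poststarthook/rbac/bootstrap-roles, which stays failed until the default RBAC roles are seeded; once it clears, the probe at 09:46:13 gets a plain 200 "ok". The same per-check [+]/[-] breakdown can be requested on demand through the raw /healthz?verbose path. A sketch via kubectl, reusing the kubeconfig path used throughout this log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "kubectl get --raw" fetches an arbitrary API path; "?verbose" makes
	// /healthz print the per-check [+]/[-] list even when everything is ok.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"get", "--raw", "/healthz?verbose").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("healthz check failed:", err)
	}
}
]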
	I1018 09:46:13.273134  400675 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:46:13.276229  400675 system_pods.go:59] 8 kube-system pods found
	I1018 09:46:13.276271  400675 system_pods.go:61] "coredns-66bc5c9577-g6bf9" [e1cba89a-b3da-49cd-9f36-7fcbad7a969d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:46:13.276282  400675 system_pods.go:61] "etcd-default-k8s-diff-port-942905" [1530bce6-93b1-4508-b07e-8abb187870cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:46:13.276303  400675 system_pods.go:61] "kindnet-xtmcm" [009f3589-2a75-43d6-8bf7-d80c5147bc32] Running
	I1018 09:46:13.276314  400675 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-942905" [8ace7000-b079-4e5a-88c8-15d9ae900acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:46:13.276326  400675 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-942905" [18fdee62-59a5-403e-850a-47ae2b52c60f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:46:13.276336  400675 system_pods.go:61] "kube-proxy-x9fjs" [16ec7433-66c9-48fb-bd90-244a1b7986d7] Running
	I1018 09:46:13.276344  400675 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-942905" [4bcc4f95-9e1d-42bf-afe1-4730f5589b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:46:13.276350  400675 system_pods.go:61] "storage-provisioner" [2ede4817-c456-41e7-a9f5-4495deed70de] Running
	I1018 09:46:13.276359  400675 system_pods.go:74] duration metric: took 3.217626ms to wait for pod list to return data ...
	I1018 09:46:13.276374  400675 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:46:13.278723  400675 default_sa.go:45] found service account: "default"
	I1018 09:46:13.278745  400675 default_sa.go:55] duration metric: took 2.36389ms for default service account to be created ...
	I1018 09:46:13.278756  400675 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:46:13.281293  400675 system_pods.go:86] 8 kube-system pods found
	I1018 09:46:13.281319  400675 system_pods.go:89] "coredns-66bc5c9577-g6bf9" [e1cba89a-b3da-49cd-9f36-7fcbad7a969d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:46:13.281329  400675 system_pods.go:89] "etcd-default-k8s-diff-port-942905" [1530bce6-93b1-4508-b07e-8abb187870cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1018 09:46:13.281334  400675 system_pods.go:89] "kindnet-xtmcm" [009f3589-2a75-43d6-8bf7-d80c5147bc32] Running
	I1018 09:46:13.281340  400675 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-942905" [8ace7000-b079-4e5a-88c8-15d9ae900acd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1018 09:46:13.281349  400675 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-942905" [18fdee62-59a5-403e-850a-47ae2b52c60f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1018 09:46:13.281362  400675 system_pods.go:89] "kube-proxy-x9fjs" [16ec7433-66c9-48fb-bd90-244a1b7986d7] Running
	I1018 09:46:13.281370  400675 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-942905" [4bcc4f95-9e1d-42bf-afe1-4730f5589b18] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1018 09:46:13.281375  400675 system_pods.go:89] "storage-provisioner" [2ede4817-c456-41e7-a9f5-4495deed70de] Running
	I1018 09:46:13.281381  400675 system_pods.go:126] duration metric: took 2.619608ms to wait for k8s-apps to be running ...
	I1018 09:46:13.281389  400675 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:46:13.281436  400675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:13.295598  400675 system_svc.go:56] duration metric: took 14.199314ms WaitForService to wait for kubelet
	I1018 09:46:13.295629  400675 kubeadm.go:586] duration metric: took 3.658908098s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:46:13.295649  400675 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:46:13.298190  400675 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:46:13.298213  400675 node_conditions.go:123] node cpu capacity is 8
	I1018 09:46:13.298225  400675 node_conditions.go:105] duration metric: took 2.571949ms to run NodePressure ...
	I1018 09:46:13.298239  400675 start.go:241] waiting for startup goroutines ...
	I1018 09:46:13.298245  400675 start.go:246] waiting for cluster config update ...
	I1018 09:46:13.298255  400675 start.go:255] writing updated cluster config ...
	I1018 09:46:13.298486  400675 ssh_runner.go:195] Run: rm -f paused
	I1018 09:46:13.302073  400675 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:46:13.305808  400675 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-g6bf9" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:46:15.311237  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	W1018 09:46:17.313029  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
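[Note on the pod_ready.go warnings interleaved through this section (processes 391061 and 400675): they poll each kube-system pod until its PodReady condition turns True. "Running" alone is not enough, which is why pods listed as "Running / Ready:ContainersNotReady" keep the wait alive. A minimal client-go sketch of that condition check, assuming the node's kubeconfig path from the log; isPodReady is an illustrative helper name, not minikube's:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// what distinguishes "Running" from "Ready" in the log above.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.Background(), "coredns-66bc5c9577-g6bf9", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("Ready:", isPodReady(pod))
}
]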
	I1018 09:46:17.099861  399440 out.go:252]   - Configuring RBAC rules ...
	I1018 09:46:17.099967  399440 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:46:17.105268  399440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:46:17.110850  399440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:46:17.114861  399440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:46:17.117507  399440 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:46:17.120047  399440 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:46:17.460335  399440 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:46:17.874735  399440 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:46:18.457474  399440 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:46:18.458359  399440 kubeadm.go:318] 
	I1018 09:46:18.458464  399440 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:46:18.458486  399440 kubeadm.go:318] 
	I1018 09:46:18.458554  399440 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:46:18.458577  399440 kubeadm.go:318] 
	I1018 09:46:18.458609  399440 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:46:18.458681  399440 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:46:18.458776  399440 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:46:18.458798  399440 kubeadm.go:318] 
	I1018 09:46:18.458895  399440 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:46:18.458903  399440 kubeadm.go:318] 
	I1018 09:46:18.458976  399440 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:46:18.458988  399440 kubeadm.go:318] 
	I1018 09:46:18.459060  399440 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:46:18.459163  399440 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:46:18.459249  399440 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:46:18.459257  399440 kubeadm.go:318] 
	I1018 09:46:18.459361  399440 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:46:18.459439  399440 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:46:18.459445  399440 kubeadm.go:318] 
	I1018 09:46:18.459527  399440 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token elka73.23q5uinl2stq819x \
	I1018 09:46:18.459622  399440 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:46:18.459651  399440 kubeadm.go:318] 	--control-plane 
	I1018 09:46:18.459660  399440 kubeadm.go:318] 
	I1018 09:46:18.459757  399440 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:46:18.459767  399440 kubeadm.go:318] 
	I1018 09:46:18.459875  399440 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token elka73.23q5uinl2stq819x \
	I1018 09:46:18.459992  399440 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:46:18.463274  399440 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:46:18.463471  399440 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1018 09:46:18.463498  399440 cni.go:84] Creating CNI manager for ""
	I1018 09:46:18.463509  399440 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:18.465792  399440 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1018 09:46:16.067903  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:16.068381  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:16.068432  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:16.068484  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:16.095475  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:16.095504  353123 cri.go:89] found id: ""
	I1018 09:46:16.095513  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:16.095576  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:16.099527  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:16.099599  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:16.125671  353123 cri.go:89] found id: ""
	I1018 09:46:16.125699  353123 logs.go:282] 0 containers: []
	W1018 09:46:16.125710  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:16.125718  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:16.125781  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:16.151732  353123 cri.go:89] found id: ""
	I1018 09:46:16.151763  353123 logs.go:282] 0 containers: []
	W1018 09:46:16.151773  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:16.151780  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:16.151850  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:16.178218  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:16.178246  353123 cri.go:89] found id: ""
	I1018 09:46:16.178258  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:16.178329  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:16.182661  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:16.182727  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:16.213515  353123 cri.go:89] found id: ""
	I1018 09:46:16.213545  353123 logs.go:282] 0 containers: []
	W1018 09:46:16.213557  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:16.213565  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:16.213630  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:16.249456  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:16.249480  353123 cri.go:89] found id: ""
	I1018 09:46:16.249491  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:16.249555  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:16.254851  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:16.254922  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:16.289558  353123 cri.go:89] found id: ""
	I1018 09:46:16.289589  353123 logs.go:282] 0 containers: []
	W1018 09:46:16.289608  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:16.289623  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:16.289691  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:16.326096  353123 cri.go:89] found id: ""
	I1018 09:46:16.326124  353123 logs.go:282] 0 containers: []
	W1018 09:46:16.326136  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:16.326147  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:16.326163  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:16.359215  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:16.359248  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:16.422070  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:16.422116  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:16.464770  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:16.464842  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:16.601523  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:16.601563  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:16.629581  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:16.629615  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:16.713344  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:16.713368  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:16.713383  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:16.762577  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:16.762627  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:18.466908  399440 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1018 09:46:18.471046  399440 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1018 09:46:18.471067  399440 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1018 09:46:18.484592  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
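[Note on the CNI step above: because the docker driver is paired with the crio runtime, minikube recommends kindnet (cni.go:143) and applies its manifest with the node-local kubectl after confirming the CNI plugins are installed. A sketch of those two steps as they would run on the node itself, using the paths from the log; this is illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// The stat of /opt/cni/bin/portmap (as in the log above) confirms the
	// standard CNI plugins are present before the manifest is applied.
	if _, err := os.Stat("/opt/cni/bin/portmap"); err != nil {
		panic(err)
	}
	out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
]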
	I1018 09:46:18.706932  399440 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1018 09:46:18.707016  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:18.707052  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-345705 minikube.k8s.io/updated_at=2025_10_18T09_46_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89 minikube.k8s.io/name=auto-345705 minikube.k8s.io/primary=true
	I1018 09:46:18.794095  399440 ops.go:34] apiserver oom_adj: -16
	I1018 09:46:18.794185  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:19.295076  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:19.794538  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:20.294843  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:20.795031  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:21.295047  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:21.794951  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1018 09:46:18.623908  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	W1018 09:46:20.625598  391061 pod_ready.go:104] pod "coredns-66bc5c9577-ksdf9" is not "Ready", error: <nil>
	I1018 09:46:21.624191  391061 pod_ready.go:94] pod "coredns-66bc5c9577-ksdf9" is "Ready"
	I1018 09:46:21.624220  391061 pod_ready.go:86] duration metric: took 39.005996274s for pod "coredns-66bc5c9577-ksdf9" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.627147  391061 pod_ready.go:83] waiting for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.631707  391061 pod_ready.go:94] pod "etcd-embed-certs-055175" is "Ready"
	I1018 09:46:21.631739  391061 pod_ready.go:86] duration metric: took 4.567885ms for pod "etcd-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.634006  391061 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.638574  391061 pod_ready.go:94] pod "kube-apiserver-embed-certs-055175" is "Ready"
	I1018 09:46:21.638601  391061 pod_ready.go:86] duration metric: took 4.570218ms for pod "kube-apiserver-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.640671  391061 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:21.822478  391061 pod_ready.go:94] pod "kube-controller-manager-embed-certs-055175" is "Ready"
	I1018 09:46:21.822509  391061 pod_ready.go:86] duration metric: took 181.813975ms for pod "kube-controller-manager-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:22.021903  391061 pod_ready.go:83] waiting for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	W1018 09:46:19.813242  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	W1018 09:46:21.813811  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	I1018 09:46:22.422117  391061 pod_ready.go:94] pod "kube-proxy-9n98q" is "Ready"
	I1018 09:46:22.422144  391061 pod_ready.go:86] duration metric: took 400.212098ms for pod "kube-proxy-9n98q" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:22.622947  391061 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:23.022172  391061 pod_ready.go:94] pod "kube-scheduler-embed-certs-055175" is "Ready"
	I1018 09:46:23.022200  391061 pod_ready.go:86] duration metric: took 399.226749ms for pod "kube-scheduler-embed-certs-055175" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:23.022212  391061 pod_ready.go:40] duration metric: took 40.407311311s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:46:23.076008  391061 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:46:23.079960  391061 out.go:179] * Done! kubectl is now configured to use "embed-certs-055175" cluster and "default" namespace by default
	I1018 09:46:22.294780  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:22.794500  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:23.294459  399440 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1018 09:46:23.371814  399440 kubeadm.go:1113] duration metric: took 4.664858244s to wait for elevateKubeSystemPrivileges
	I1018 09:46:23.371878  399440 kubeadm.go:402] duration metric: took 15.157609793s to StartCluster
	I1018 09:46:23.371904  399440 settings.go:142] acquiring lock: {Name:mkc658649f6435cf0a6997f0a764ff80a96d6138 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:23.371983  399440 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:46:23.374749  399440 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/kubeconfig: {Name:mk9e1fe660cc91a2dd3b21b320dee7019fd0d654 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:46:23.375188  399440 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:46:23.375240  399440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1018 09:46:23.375291  399440 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1018 09:46:23.375372  399440 addons.go:69] Setting storage-provisioner=true in profile "auto-345705"
	I1018 09:46:23.375390  399440 addons.go:238] Setting addon storage-provisioner=true in "auto-345705"
	I1018 09:46:23.375418  399440 host.go:66] Checking if "auto-345705" exists ...
	I1018 09:46:23.375427  399440 config.go:182] Loaded profile config "auto-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:23.375476  399440 addons.go:69] Setting default-storageclass=true in profile "auto-345705"
	I1018 09:46:23.375492  399440 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-345705"
	I1018 09:46:23.375900  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:23.376044  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:23.377307  399440 out.go:179] * Verifying Kubernetes components...
	I1018 09:46:23.378808  399440 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1018 09:46:23.406503  399440 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1018 09:46:19.337356  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:19.337902  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:19.337974  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:19.338054  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:19.367236  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:19.367257  353123 cri.go:89] found id: ""
	I1018 09:46:19.367267  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:19.367329  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:19.371389  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:19.371461  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:19.398889  353123 cri.go:89] found id: ""
	I1018 09:46:19.398912  353123 logs.go:282] 0 containers: []
	W1018 09:46:19.398920  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:19.398926  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:19.398972  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:19.427300  353123 cri.go:89] found id: ""
	I1018 09:46:19.427325  353123 logs.go:282] 0 containers: []
	W1018 09:46:19.427333  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:19.427339  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:19.427393  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:19.461809  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:19.461870  353123 cri.go:89] found id: ""
	I1018 09:46:19.461881  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:19.461944  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:19.466794  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:19.466922  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:19.499692  353123 cri.go:89] found id: ""
	I1018 09:46:19.499724  353123 logs.go:282] 0 containers: []
	W1018 09:46:19.499734  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:19.499742  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:19.499811  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:19.533000  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:19.533034  353123 cri.go:89] found id: ""
	I1018 09:46:19.533044  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:19.533105  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:19.537861  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:19.537927  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:19.571331  353123 cri.go:89] found id: ""
	I1018 09:46:19.571363  353123 logs.go:282] 0 containers: []
	W1018 09:46:19.571374  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:19.571382  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:19.571554  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:19.606532  353123 cri.go:89] found id: ""
	I1018 09:46:19.606559  353123 logs.go:282] 0 containers: []
	W1018 09:46:19.606592  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:19.606607  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:19.606629  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:19.633762  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:19.633798  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:19.708874  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:19.708927  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:19.708953  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:19.751934  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:19.751981  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:19.830202  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:19.830241  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:19.864189  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:19.864224  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:19.933962  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:19.934003  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:19.972552  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:19.972587  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:22.615982  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:22.616402  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:22.616450  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:22.616497  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:22.644952  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:22.644973  353123 cri.go:89] found id: ""
	I1018 09:46:22.644983  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:22.645046  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:22.648933  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:22.648989  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:22.675258  353123 cri.go:89] found id: ""
	I1018 09:46:22.675283  353123 logs.go:282] 0 containers: []
	W1018 09:46:22.675291  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:22.675296  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:22.675348  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:22.702848  353123 cri.go:89] found id: ""
	I1018 09:46:22.702874  353123 logs.go:282] 0 containers: []
	W1018 09:46:22.702883  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:22.702889  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:22.702945  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:22.728726  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:22.728749  353123 cri.go:89] found id: ""
	I1018 09:46:22.728758  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:22.728842  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:22.732815  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:22.732914  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:22.759136  353123 cri.go:89] found id: ""
	I1018 09:46:22.759167  353123 logs.go:282] 0 containers: []
	W1018 09:46:22.759179  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:22.759186  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:22.759243  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:22.788122  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:22.788149  353123 cri.go:89] found id: ""
	I1018 09:46:22.788162  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:22.788228  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:22.792603  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:22.792674  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:22.825327  353123 cri.go:89] found id: ""
	I1018 09:46:22.825356  353123 logs.go:282] 0 containers: []
	W1018 09:46:22.825367  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:22.825375  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:22.825435  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:22.857206  353123 cri.go:89] found id: ""
	I1018 09:46:22.857247  353123 logs.go:282] 0 containers: []
	W1018 09:46:22.857260  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:22.857273  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:22.857289  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:22.955843  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:22.955877  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:22.975457  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:22.975492  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:23.033992  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:23.034017  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:23.034030  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:23.074467  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:23.074514  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:23.149266  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:23.149294  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:23.179447  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:23.179481  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1018 09:46:23.240272  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:23.240301  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
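[Note on the "ssh_runner.go:195] Run:" lines throughout these dumps: each one executes its command over an SSH session into the node container. A minimal sketch of one such invocation with golang.org/x/crypto/ssh, borrowing the address, user, and key path that sshutil.go prints just below for auto-345705; the target node and command here are illustrative:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, port, and user come from the "new ssh client" line below
	// (127.0.0.1:33228, Username:docker).
	pem, err := os.ReadFile("/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pem)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33228", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test-VM host keys
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	// One of the log-gathering commands seen above, run remotely.
	out, _ := sess.CombinedOutput("sudo journalctl -u kubelet -n 400")
	fmt.Print(string(out))
}
]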
	I1018 09:46:23.409114  399440 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:46:23.409143  399440 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1018 09:46:23.409226  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:23.412566  399440 addons.go:238] Setting addon default-storageclass=true in "auto-345705"
	I1018 09:46:23.412619  399440 host.go:66] Checking if "auto-345705" exists ...
	I1018 09:46:23.413160  399440 cli_runner.go:164] Run: docker container inspect auto-345705 --format={{.State.Status}}
	I1018 09:46:23.443838  399440 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1018 09:46:23.443866  399440 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1018 09:46:23.443949  399440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-345705
	I1018 09:46:23.445053  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:23.471917  399440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33228 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/auto-345705/id_rsa Username:docker}
	I1018 09:46:23.480917  399440 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1018 09:46:23.544671  399440 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1018 09:46:23.563415  399440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1018 09:46:23.604974  399440 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1018 09:46:23.682379  399440 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1018 09:46:23.684443  399440 node_ready.go:35] waiting up to 15m0s for node "auto-345705" to be "Ready" ...
	I1018 09:46:23.880802  399440 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1018 09:46:23.881875  399440 addons.go:514] duration metric: took 506.580053ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1018 09:46:24.186988  399440 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-345705" context rescaled to 1 replicas
	W1018 09:46:25.687371  399440 node_ready.go:57] node "auto-345705" has "Ready":"False" status (will retry)
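For reference, the sed pipeline run at 09:46:23.480917 above edits the coredns ConfigMap in place. Reconstructed directly from that command, the fragment it inserts ahead of the `forward . /etc/resolv.conf` stanza is:

        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }

The same command also inserts a `log` directive ahead of the `errors` line; the "host record injected" line at 09:46:23.682379 confirms the replace succeeded.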
	W1018 09:46:24.310942  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	W1018 09:46:26.811119  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	I1018 09:46:25.770898  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:25.771256  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:25.771300  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1018 09:46:25.771348  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1018 09:46:25.798785  353123 cri.go:89] found id: "19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:25.798811  353123 cri.go:89] found id: ""
	I1018 09:46:25.798842  353123 logs.go:282] 1 containers: [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03]
	I1018 09:46:25.798904  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:25.802848  353123 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1018 09:46:25.802932  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1018 09:46:25.829891  353123 cri.go:89] found id: ""
	I1018 09:46:25.829915  353123 logs.go:282] 0 containers: []
	W1018 09:46:25.829923  353123 logs.go:284] No container was found matching "etcd"
	I1018 09:46:25.829929  353123 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1018 09:46:25.829981  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1018 09:46:25.856187  353123 cri.go:89] found id: ""
	I1018 09:46:25.856215  353123 logs.go:282] 0 containers: []
	W1018 09:46:25.856223  353123 logs.go:284] No container was found matching "coredns"
	I1018 09:46:25.856229  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1018 09:46:25.856282  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1018 09:46:25.883378  353123 cri.go:89] found id: "7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:25.883406  353123 cri.go:89] found id: ""
	I1018 09:46:25.883416  353123 logs.go:282] 1 containers: [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9]
	I1018 09:46:25.883476  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:25.887419  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1018 09:46:25.887480  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1018 09:46:25.913889  353123 cri.go:89] found id: ""
	I1018 09:46:25.913922  353123 logs.go:282] 0 containers: []
	W1018 09:46:25.913934  353123 logs.go:284] No container was found matching "kube-proxy"
	I1018 09:46:25.913943  353123 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1018 09:46:25.913998  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1018 09:46:25.940407  353123 cri.go:89] found id: "b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:25.940433  353123 cri.go:89] found id: ""
	I1018 09:46:25.940444  353123 logs.go:282] 1 containers: [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88]
	I1018 09:46:25.940493  353123 ssh_runner.go:195] Run: which crictl
	I1018 09:46:25.944347  353123 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1018 09:46:25.944406  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1018 09:46:25.970599  353123 cri.go:89] found id: ""
	I1018 09:46:25.970623  353123 logs.go:282] 0 containers: []
	W1018 09:46:25.970631  353123 logs.go:284] No container was found matching "kindnet"
	I1018 09:46:25.970639  353123 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1018 09:46:25.970683  353123 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1018 09:46:25.995685  353123 cri.go:89] found id: ""
	I1018 09:46:25.995713  353123 logs.go:282] 0 containers: []
	W1018 09:46:25.995721  353123 logs.go:284] No container was found matching "storage-provisioner"
	I1018 09:46:25.995730  353123 logs.go:123] Gathering logs for container status ...
	I1018 09:46:25.995746  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1018 09:46:26.024904  353123 logs.go:123] Gathering logs for kubelet ...
	I1018 09:46:26.024930  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1018 09:46:26.119921  353123 logs.go:123] Gathering logs for dmesg ...
	I1018 09:46:26.119956  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1018 09:46:26.138968  353123 logs.go:123] Gathering logs for describe nodes ...
	I1018 09:46:26.138994  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1018 09:46:26.198070  353123 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1018 09:46:26.198091  353123 logs.go:123] Gathering logs for kube-apiserver [19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03] ...
	I1018 09:46:26.198107  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 19c378ffdc378f4602fc813104f97ae99d6de35c5d012d04acaa1e737f4f2f03"
	I1018 09:46:26.230982  353123 logs.go:123] Gathering logs for kube-scheduler [7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9] ...
	I1018 09:46:26.231011  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7a696f33b6d68ef7d0ef8dfbb27e9f5b3b558c14933cead42d560aee1c21b1e9"
	I1018 09:46:26.286450  353123 logs.go:123] Gathering logs for kube-controller-manager [b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88] ...
	I1018 09:46:26.286483  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 b60197dc6a8857d423c70c1bf7a86b81c4c19218a9a4b9522558c6f024002f88"
	I1018 09:46:26.314502  353123 logs.go:123] Gathering logs for CRI-O ...
	I1018 09:46:26.314527  353123 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1018 09:46:27.687669  399440 node_ready.go:57] node "auto-345705" has "Ready":"False" status (will retry)
	W1018 09:46:30.187307  399440 node_ready.go:57] node "auto-345705" has "Ready":"False" status (will retry)
	W1018 09:46:28.811732  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	W1018 09:46:30.811793  400675 pod_ready.go:104] pod "coredns-66bc5c9577-g6bf9" is not "Ready", error: <nil>
	I1018 09:46:28.867292  353123 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1018 09:46:28.867741  353123 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1018 09:46:28.867806  353123 kubeadm.go:601] duration metric: took 4m4.294777593s to restartPrimaryControlPlane
	W1018 09:46:28.867925  353123 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1018 09:46:28.867994  353123 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1018 09:46:29.455354  353123 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:29.469331  353123 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1018 09:46:29.477395  353123 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1018 09:46:29.477440  353123 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1018 09:46:29.485306  353123 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1018 09:46:29.485322  353123 kubeadm.go:157] found existing configuration files:
	
	I1018 09:46:29.485361  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1018 09:46:29.493014  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1018 09:46:29.493058  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1018 09:46:29.500503  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1018 09:46:29.508229  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1018 09:46:29.508284  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1018 09:46:29.516056  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1018 09:46:29.524115  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1018 09:46:29.524174  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1018 09:46:29.531577  353123 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1018 09:46:29.538903  353123 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1018 09:46:29.538955  353123 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1018 09:46:29.546446  353123 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1018 09:46:29.602458  353123 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1041-gcp\n", err: exit status 1
	I1018 09:46:29.660897  353123 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1018 09:46:32.187866  399440 node_ready.go:57] node "auto-345705" has "Ready":"False" status (will retry)
	I1018 09:46:34.687803  399440 node_ready.go:49] node "auto-345705" is "Ready"
	I1018 09:46:34.687861  399440 node_ready.go:38] duration metric: took 11.003384545s for node "auto-345705" to be "Ready" ...
	I1018 09:46:34.687879  399440 api_server.go:52] waiting for apiserver process to appear ...
	I1018 09:46:34.687938  399440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:46:34.703656  399440 api_server.go:72] duration metric: took 11.328425713s to wait for apiserver process to appear ...
	I1018 09:46:34.703682  399440 api_server.go:88] waiting for apiserver healthz status ...
	I1018 09:46:34.703711  399440 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1018 09:46:34.710286  399440 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1018 09:46:34.711427  399440 api_server.go:141] control plane version: v1.34.1
	I1018 09:46:34.711459  399440 api_server.go:131] duration metric: took 7.768547ms to wait for apiserver health ...
	I1018 09:46:34.711470  399440 system_pods.go:43] waiting for kube-system pods to appear ...
	I1018 09:46:34.715622  399440 system_pods.go:59] 8 kube-system pods found
	I1018 09:46:34.715661  399440 system_pods.go:61] "coredns-66bc5c9577-c45cm" [bbaa4412-a852-4eba-b406-498c505154ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:46:34.715671  399440 system_pods.go:61] "etcd-auto-345705" [f4ddc303-8f9c-442a-937b-21b9c7c6ba3c] Running
	I1018 09:46:34.715682  399440 system_pods.go:61] "kindnet-8prng" [b0ba24fb-c000-4468-b594-e15ca19d1217] Running
	I1018 09:46:34.715688  399440 system_pods.go:61] "kube-apiserver-auto-345705" [1facf8b9-1396-42aa-9283-9c6f94cbc772] Running
	I1018 09:46:34.715696  399440 system_pods.go:61] "kube-controller-manager-auto-345705" [314f4642-9c74-4be2-a4a5-c86eb54d98af] Running
	I1018 09:46:34.715704  399440 system_pods.go:61] "kube-proxy-t8zkf" [cf612b40-522d-4db6-9dc5-3933b68639c8] Running
	I1018 09:46:34.715711  399440 system_pods.go:61] "kube-scheduler-auto-345705" [5ca46e56-2651-44ba-84bf-78d4b35ec55e] Running
	I1018 09:46:34.715722  399440 system_pods.go:61] "storage-provisioner" [990ba7cf-70bb-48a0-9f36-bea32f9b9c2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:46:34.715733  399440 system_pods.go:74] duration metric: took 4.255974ms to wait for pod list to return data ...
	I1018 09:46:34.715747  399440 default_sa.go:34] waiting for default service account to be created ...
	I1018 09:46:34.718713  399440 default_sa.go:45] found service account: "default"
	I1018 09:46:34.718741  399440 default_sa.go:55] duration metric: took 2.984167ms for default service account to be created ...
	I1018 09:46:34.718754  399440 system_pods.go:116] waiting for k8s-apps to be running ...
	I1018 09:46:34.722650  399440 system_pods.go:86] 8 kube-system pods found
	I1018 09:46:34.722689  399440 system_pods.go:89] "coredns-66bc5c9577-c45cm" [bbaa4412-a852-4eba-b406-498c505154ab] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1018 09:46:34.722698  399440 system_pods.go:89] "etcd-auto-345705" [f4ddc303-8f9c-442a-937b-21b9c7c6ba3c] Running
	I1018 09:46:34.722706  399440 system_pods.go:89] "kindnet-8prng" [b0ba24fb-c000-4468-b594-e15ca19d1217] Running
	I1018 09:46:34.722711  399440 system_pods.go:89] "kube-apiserver-auto-345705" [1facf8b9-1396-42aa-9283-9c6f94cbc772] Running
	I1018 09:46:34.722718  399440 system_pods.go:89] "kube-controller-manager-auto-345705" [314f4642-9c74-4be2-a4a5-c86eb54d98af] Running
	I1018 09:46:34.722731  399440 system_pods.go:89] "kube-proxy-t8zkf" [cf612b40-522d-4db6-9dc5-3933b68639c8] Running
	I1018 09:46:34.722734  399440 system_pods.go:89] "kube-scheduler-auto-345705" [5ca46e56-2651-44ba-84bf-78d4b35ec55e] Running
	I1018 09:46:34.722752  399440 system_pods.go:89] "storage-provisioner" [990ba7cf-70bb-48a0-9f36-bea32f9b9c2f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1018 09:46:34.722779  399440 retry.go:31] will retry after 256.118538ms: missing components: kube-dns
	I1018 09:46:34.983013  399440 system_pods.go:86] 8 kube-system pods found
	I1018 09:46:34.983047  399440 system_pods.go:89] "coredns-66bc5c9577-c45cm" [bbaa4412-a852-4eba-b406-498c505154ab] Running
	I1018 09:46:34.983055  399440 system_pods.go:89] "etcd-auto-345705" [f4ddc303-8f9c-442a-937b-21b9c7c6ba3c] Running
	I1018 09:46:34.983062  399440 system_pods.go:89] "kindnet-8prng" [b0ba24fb-c000-4468-b594-e15ca19d1217] Running
	I1018 09:46:34.983067  399440 system_pods.go:89] "kube-apiserver-auto-345705" [1facf8b9-1396-42aa-9283-9c6f94cbc772] Running
	I1018 09:46:34.983080  399440 system_pods.go:89] "kube-controller-manager-auto-345705" [314f4642-9c74-4be2-a4a5-c86eb54d98af] Running
	I1018 09:46:34.983092  399440 system_pods.go:89] "kube-proxy-t8zkf" [cf612b40-522d-4db6-9dc5-3933b68639c8] Running
	I1018 09:46:34.983097  399440 system_pods.go:89] "kube-scheduler-auto-345705" [5ca46e56-2651-44ba-84bf-78d4b35ec55e] Running
	I1018 09:46:34.983102  399440 system_pods.go:89] "storage-provisioner" [990ba7cf-70bb-48a0-9f36-bea32f9b9c2f] Running
	I1018 09:46:34.983112  399440 system_pods.go:126] duration metric: took 264.351088ms to wait for k8s-apps to be running ...
	I1018 09:46:34.983121  399440 system_svc.go:44] waiting for kubelet service to be running ....
	I1018 09:46:34.983175  399440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:46:34.998494  399440 system_svc.go:56] duration metric: took 15.363822ms WaitForService to wait for kubelet
	I1018 09:46:34.998521  399440 kubeadm.go:586] duration metric: took 11.623297804s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:46:34.998541  399440 node_conditions.go:102] verifying NodePressure condition ...
	I1018 09:46:35.001759  399440 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1018 09:46:35.001790  399440 node_conditions.go:123] node cpu capacity is 8
	I1018 09:46:35.001844  399440 node_conditions.go:105] duration metric: took 3.294567ms to run NodePressure ...
	I1018 09:46:35.001860  399440 start.go:241] waiting for startup goroutines ...
	I1018 09:46:35.001876  399440 start.go:246] waiting for cluster config update ...
	I1018 09:46:35.001890  399440 start.go:255] writing updated cluster config ...
	I1018 09:46:35.002159  399440 ssh_runner.go:195] Run: rm -f paused
	I1018 09:46:35.006501  399440 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:46:35.010683  399440 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c45cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.015112  399440 pod_ready.go:94] pod "coredns-66bc5c9577-c45cm" is "Ready"
	I1018 09:46:35.015156  399440 pod_ready.go:86] duration metric: took 4.424739ms for pod "coredns-66bc5c9577-c45cm" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.017264  399440 pod_ready.go:83] waiting for pod "etcd-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.021464  399440 pod_ready.go:94] pod "etcd-auto-345705" is "Ready"
	I1018 09:46:35.021488  399440 pod_ready.go:86] duration metric: took 4.202831ms for pod "etcd-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.023669  399440 pod_ready.go:83] waiting for pod "kube-apiserver-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.027442  399440 pod_ready.go:94] pod "kube-apiserver-auto-345705" is "Ready"
	I1018 09:46:35.027466  399440 pod_ready.go:86] duration metric: took 3.77532ms for pod "kube-apiserver-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.029392  399440 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.411161  399440 pod_ready.go:94] pod "kube-controller-manager-auto-345705" is "Ready"
	I1018 09:46:35.411184  399440 pod_ready.go:86] duration metric: took 381.770694ms for pod "kube-controller-manager-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:35.611330  399440 pod_ready.go:83] waiting for pod "kube-proxy-t8zkf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:36.010811  399440 pod_ready.go:94] pod "kube-proxy-t8zkf" is "Ready"
	I1018 09:46:36.010877  399440 pod_ready.go:86] duration metric: took 399.522073ms for pod "kube-proxy-t8zkf" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:36.211119  399440 pod_ready.go:83] waiting for pod "kube-scheduler-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:36.612246  399440 pod_ready.go:94] pod "kube-scheduler-auto-345705" is "Ready"
	I1018 09:46:36.612271  399440 pod_ready.go:86] duration metric: took 401.128494ms for pod "kube-scheduler-auto-345705" in "kube-system" namespace to be "Ready" or be gone ...
	I1018 09:46:36.612283  399440 pod_ready.go:40] duration metric: took 1.605752221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1018 09:46:36.664769  399440 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1018 09:46:36.666629  399440 out.go:179] * Done! kubectl is now configured to use "auto-345705" cluster and "default" namespace by default
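The pod_ready waits in the run above boil down to reading each pod's Ready condition from the API. Below is a minimal sketch of that check with client-go; the function names and clientset wiring are illustrative, not minikube's actual helpers:

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    // podReady fetches a pod by name and applies the check, mirroring
    // the `waiting for pod ... to be "Ready"` log lines above.
    func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return isPodReady(pod), nil
    }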
	I1018 09:46:36.689809  353123 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:46:36.690013  353123 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:46:36.690190  353123 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:46:36.690324  353123 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:46:36.690365  353123 kubeadm.go:318] OS: Linux
	I1018 09:46:36.690499  353123 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:46:36.690585  353123 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:46:36.690714  353123 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:46:36.690787  353123 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:46:36.690877  353123 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:46:36.690945  353123 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:46:36.691030  353123 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:46:36.691105  353123 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:46:36.691201  353123 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:46:36.691322  353123 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:46:36.691460  353123 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:46:36.691995  353123 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:46:36.693716  353123 out.go:252]   - Generating certificates and keys ...
	I1018 09:46:36.693812  353123 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:46:36.693939  353123 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:46:36.694055  353123 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1018 09:46:36.694151  353123 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1018 09:46:36.694252  353123 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1018 09:46:36.694329  353123 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1018 09:46:36.694414  353123 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1018 09:46:36.694501  353123 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1018 09:46:36.694611  353123 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1018 09:46:36.694707  353123 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1018 09:46:36.694763  353123 kubeadm.go:318] [certs] Using the existing "sa" key
	I1018 09:46:36.694859  353123 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:46:36.694955  353123 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:46:36.695035  353123 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:46:36.695119  353123 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:46:36.695231  353123 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:46:36.695325  353123 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:46:36.695431  353123 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:46:36.695524  353123 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:46:36.696886  353123 out.go:252]   - Booting up control plane ...
	I1018 09:46:36.696986  353123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:46:36.697108  353123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:46:36.697198  353123 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:46:36.697320  353123 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:46:36.697464  353123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:46:36.697616  353123 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:46:36.697733  353123 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:46:36.697780  353123 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:46:36.697957  353123 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:46:36.698069  353123 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:46:36.698141  353123 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.733939ms
	I1018 09:46:36.698279  353123 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:46:36.698393  353123 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 09:46:36.698540  353123 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:46:36.698667  353123 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:46:36.698776  353123 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.54595951s
	I1018 09:46:36.698897  353123 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.963835894s
	I1018 09:46:36.698976  353123 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501830709s
	I1018 09:46:36.699090  353123 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:46:36.699272  353123 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:46:36.699361  353123 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:46:36.699587  353123 kubeadm.go:318] [mark-control-plane] Marking the node kubernetes-upgrade-919613 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:46:36.699652  353123 kubeadm.go:318] [bootstrap-token] Using token: bf4nlf.qokkooo3bttxjsir
	I1018 09:46:36.701034  353123 out.go:252]   - Configuring RBAC rules ...
	I1018 09:46:36.701175  353123 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:46:36.701284  353123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:46:36.701476  353123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:46:36.701681  353123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:46:36.701789  353123 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:46:36.701903  353123 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:46:36.702064  353123 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:46:36.702135  353123 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:46:36.702217  353123 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:46:36.702233  353123 kubeadm.go:318] 
	I1018 09:46:36.702318  353123 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:46:36.702327  353123 kubeadm.go:318] 
	I1018 09:46:36.702399  353123 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:46:36.702407  353123 kubeadm.go:318] 
	I1018 09:46:36.702427  353123 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:46:36.702492  353123 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:46:36.702544  353123 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:46:36.702552  353123 kubeadm.go:318] 
	I1018 09:46:36.702621  353123 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:46:36.702630  353123 kubeadm.go:318] 
	I1018 09:46:36.702695  353123 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:46:36.702704  353123 kubeadm.go:318] 
	I1018 09:46:36.702773  353123 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:46:36.702905  353123 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:46:36.702965  353123 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:46:36.702972  353123 kubeadm.go:318] 
	I1018 09:46:36.703037  353123 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:46:36.703098  353123 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:46:36.703104  353123 kubeadm.go:318] 
	I1018 09:46:36.703168  353123 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token bf4nlf.qokkooo3bttxjsir \
	I1018 09:46:36.703278  353123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:46:36.703318  353123 kubeadm.go:318] 	--control-plane 
	I1018 09:46:36.703325  353123 kubeadm.go:318] 
	I1018 09:46:36.703429  353123 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:46:36.703440  353123 kubeadm.go:318] 
	I1018 09:46:36.703563  353123 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token bf4nlf.qokkooo3bttxjsir \
	I1018 09:46:36.703727  353123 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:46:36.703745  353123 cni.go:84] Creating CNI manager for ""
	I1018 09:46:36.703757  353123 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:36.705288  353123 out.go:179] * Configuring CNI (Container Networking Interface) ...
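The api_server.go healthz checks interleaved above (09:46:25.770898 and 09:46:34.703711) poll the apiserver's /healthz endpoint, treating a refused connection as "stopped" and a 200 as healthy. A minimal sketch of that style of probe, assuming an *http.Client already configured to trust the cluster CA (TLS setup elided):

    import (
        "fmt"
        "net/http"
    )

    // checkHealthz mirrors the probe in the log: a 200 response means
    // healthy; a dial error (e.g. "connect: connection refused", as
    // logged above) means the apiserver is not listening yet.
    func checkHealthz(client *http.Client, url string) error {
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d", resp.StatusCode)
        }
        return nil
    }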
	
	
	==> CRI-O <==
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.739813749Z" level=info msg="Started container" PID=1754 containerID=8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper id=4dfe9b46-151d-463e-bf19-4bf841a83e06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c5c582ed677d3f977eb7b8a16c11dd5c0376df50082d8ab46f87d7831029e2
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.793445343Z" level=info msg="Removing container: 5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed" id=5cbf729b-6eeb-4818-a5c8-f86e827a5c22 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.803709463Z" level=info msg="Removed container 5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=5cbf729b-6eeb-4818-a5c8-f86e827a5c22 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.811488494Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bf357b8-5e92-42ed-b626-2ac6d084fe99 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.812474089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8c3e0980-28e2-46fd-bb26-58cc56678b92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.814314085Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dd152ca9-bbaf-4d62-b335-150c428e0dd3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.814578574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819407994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819607854Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9968955a7e6b954be5a30a0fd1a682309cc5a343a80e7b976a730ad96b139a71/merged/etc/passwd: no such file or directory"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.81963995Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9968955a7e6b954be5a30a0fd1a682309cc5a343a80e7b976a730ad96b139a71/merged/etc/group: no such file or directory"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819918441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.852770092Z" level=info msg="Created container eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af: kube-system/storage-provisioner/storage-provisioner" id=dd152ca9-bbaf-4d62-b335-150c428e0dd3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.853613248Z" level=info msg="Starting container: eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af" id=6e810bc2-c1ac-4b21-a12f-c8972e1a87cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.856187215Z" level=info msg="Started container" PID=1768 containerID=eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af description=kube-system/storage-provisioner/storage-provisioner id=6e810bc2-c1ac-4b21-a12f-c8972e1a87cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a8eaca320a2f3cf0b8a43acc2f6bac8b5e7ebcc1f500cc21bab0220d5907455
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.679761584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=033e421a-2f1a-4f2c-826f-4258a9b4728e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.68078595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c9c81d5-3c67-4d3a-9d6d-03f29bbf394b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.682013153Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=8bee58b6-67af-4309-a895-dcdf380dbd3b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.682279481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.688541666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.689109429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.718541431Z" level=info msg="Created container d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=8bee58b6-67af-4309-a895-dcdf380dbd3b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.719234065Z" level=info msg="Starting container: d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c" id=0fd36466-333f-4de8-8567-2df5b4d243d4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.72127391Z" level=info msg="Started container" PID=1804 containerID=d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper id=0fd36466-333f-4de8-8567-2df5b4d243d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c5c582ed677d3f977eb7b8a16c11dd5c0376df50082d8ab46f87d7831029e2
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.866545412Z" level=info msg="Removing container: 8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062" id=e20ed6a9-98a9-4c56-918b-970de2c95b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.87638922Z" level=info msg="Removed container 8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=e20ed6a9-98a9-4c56-918b-970de2c95b1b name=/runtime.v1.RuntimeService/RemoveContainer
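The CreateContainer/StartContainer/RemoveContainer calls in the CRI-O journal above are the CRI gRPC API, the same one crictl speaks. A sketch of listing containers over the socket named in the kubeadm reset command earlier (/var/run/crio/crio.sock), using the usual dial options for a local unix socket; a sketch only, not minikube's code:

    import (
        "context"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    // listContainers is roughly what `crictl ps -a` does: dial the
    // CRI socket and call RuntimeService.ListContainers.
    func listContainers(ctx context.Context) ([]*runtimev1.Container, error) {
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            return nil, err
        }
        defer conn.Close()
        client := runtimev1.NewRuntimeServiceClient(conn)
        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
        if err != nil {
            return nil, err
        }
        return resp.Containers, nil
    }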
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d47777b68d3c4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   15c5c582ed677       dashboard-metrics-scraper-6ffb444bf9-l9729   kubernetes-dashboard
	eb19b9973edd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   5a8eaca320a2f       storage-provisioner                          kube-system
	492f754e3064c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   40376facc707b       kubernetes-dashboard-855c9754f9-5ddr7        kubernetes-dashboard
	deb20ecd58a6f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   5836c3ce58123       busybox                                      default
	7532c8b9596c0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   7268dcec2d1fb       coredns-66bc5c9577-ksdf9                     kube-system
	fca12026bf0b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   3e8d4d152a834       kindnet-tntfx                                kube-system
	cfb7f8b3c954a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   5a8eaca320a2f       storage-provisioner                          kube-system
	18b9b557a1a00       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   480841c7ca9a0       kube-proxy-9n98q                             kube-system
	82544c7a6f005       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   3d26a36ca5806       kube-scheduler-embed-certs-055175            kube-system
	d8a28c141ac16       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   b85b22c0c63ee       etcd-embed-certs-055175                      kube-system
	f427b03ed9ce5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   1d59202d0504e       kube-apiserver-embed-certs-055175            kube-system
	0aa95bb2edd15       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   9af16a1156df6       kube-controller-manager-embed-certs-055175   kube-system
	
	
	==> coredns [7532c8b9596c037e46d007cefb401457054eec5cbee4a52ea325b5f3828bb3f9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45054 - 63898 "HINFO IN 5170228139375379588.4831894331947021424. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05520709s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
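The i/o timeouts above are coredns failing to reach the kubernetes Service VIP (10.96.0.1:443, taken from the log) before the dataplane (kube-proxy plus kindnet here) has programmed it; once it has, the list calls succeed and the "Still waiting" lines stop. A quick connectivity check for that path, sketched under the assumption it runs inside a pod on this cluster (the five-second timeout is an arbitrary choice):

    import (
        "net"
        "time"
    )

    // dialVIP succeeds once the Service VIP is translated to a live
    // apiserver endpoint; before that it fails with an i/o timeout,
    // matching the reflector errors above.
    func dialVIP() error {
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
        if err != nil {
            return err
        }
        return conn.Close()
    }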
	
	
	==> describe nodes <==
	Name:               embed-certs-055175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-055175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-055175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_44_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:44:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-055175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-055175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                a753bc03-5449-4387-b526-2cbb885beb79
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-ksdf9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-055175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-tntfx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-055175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-055175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-9n98q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-055175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l9729    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ddr7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-055175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-055175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-055175 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-055175 event: Registered Node embed-certs-055175 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-055175 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-055175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-055175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-055175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-055175 event: Registered Node embed-certs-055175 in Controller
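The 850m CPU request in "Allocated resources" is the straight sum of the per-pod requests listed above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m; against the node's 8-CPU (8000m) capacity that is 850/8000 ≈ 10.6%, displayed rounded as 10%.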
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f] <==
	{"level":"warn","ts":"2025-10-18T09:45:40.195652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.202210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.210993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.218708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.227447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.235630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.241977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.250635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.257121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.263590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.270900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.277237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.284881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.292225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.298910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.306396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.313564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.326027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.329748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.345590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.405673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:01.576704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.2517ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356040983161112 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" mod_revision:577 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:46:01.576818Z","caller":"traceutil/trace.go:172","msg":"trace[1910276683] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"178.962172ms","start":"2025-10-18T09:46:01.397839Z","end":"2025-10-18T09:46:01.576801Z","steps":["trace[1910276683] 'process raft request'  (duration: 53.050253ms)","trace[1910276683] 'compare'  (duration: 125.121379ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:46:01.590784Z","caller":"traceutil/trace.go:172","msg":"trace[2078134268] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"167.678962ms","start":"2025-10-18T09:46:01.423086Z","end":"2025-10-18T09:46:01.590765Z","steps":["trace[2078134268] 'process raft request'  (duration: 167.561806ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:46:01.717527Z","caller":"traceutil/trace.go:172","msg":"trace[499519936] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"257.47078ms","start":"2025-10-18T09:46:01.460025Z","end":"2025-10-18T09:46:01.717496Z","steps":["trace[499519936] 'process raft request'  (duration: 246.427258ms)","trace[499519936] 'compare'  (duration: 10.794075ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:46:38 up  1:29,  0 user,  load average: 4.82, 3.24, 2.07
	Linux embed-certs-055175 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fca12026bf0b1ba5900afb94e683550d1e47af8a207f77fcb266172b3322547a] <==
	I1018 09:45:42.289015       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:42.289297       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:45:42.289447       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:42.289465       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:42.289478       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:42.492436       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:42.492478       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:42.492508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:42.492641       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:42.988958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:42.988996       1 metrics.go:72] Registering metrics
	I1018 09:45:42.989118       1 controller.go:711] "Syncing nftables rules"
	I1018 09:45:52.492993       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:45:52.493062       1 main.go:301] handling current node
	I1018 09:46:02.492899       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:02.492968       1 main.go:301] handling current node
	I1018 09:46:12.493328       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:12.493368       1 main.go:301] handling current node
	I1018 09:46:22.492894       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:22.492970       1 main.go:301] handling current node
	I1018 09:46:32.492984       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:32.493021       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d] <==
	I1018 09:45:40.978941       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:45:40.978951       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:45:40.978958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:45:40.978964       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:45:40.974772       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:45:40.974362       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:45:40.979924       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:45:40.981071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:45:40.985840       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:40.989517       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:45:40.989547       1 policy_source.go:240] refreshing policies
	I1018 09:45:40.994399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:45:41.004343       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:41.077950       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:45:41.319030       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:45:41.347778       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:41.369242       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:41.380490       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:41.387761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:41.422930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.38.12"}
	I1018 09:45:41.442056       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.62.121"}
	I1018 09:45:41.877915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:43.651995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:44.045390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:45:44.197558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d] <==
	I1018 09:45:43.640624       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:45:43.640918       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-055175"
	I1018 09:45:43.640986       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:45:43.641055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:43.640728       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:45:43.641558       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:45:43.641707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:45:43.642249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:45:43.642387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:43.642534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:45:43.642570       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:45:43.643286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:45:43.644860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:45:43.645389       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:45:43.647358       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:45:43.650129       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:45:43.651302       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:43.651413       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:45:43.653682       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:45:43.655685       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:45:43.658009       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:45:43.660589       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:45:43.668879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:43.681042       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:43.691029       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [18b9b557a1a00e9e1345cbaf906acf8f76759deaa3ffbb5c5956d703f09a134d] <==
	I1018 09:45:42.101039       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:42.167571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:42.268714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:42.268745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:45:42.268836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:42.291866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:42.291941       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:42.298046       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:42.298480       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:42.298516       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:42.302465       1 config.go:200] "Starting service config controller"
	I1018 09:45:42.302487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:42.302501       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:42.302507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:42.302527       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:42.302532       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:42.302785       1 config.go:309] "Starting node config controller"
	I1018 09:45:42.302798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:42.403250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:45:42.403261       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:42.403300       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:45:42.403285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75] <==
	I1018 09:45:39.797865       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:45:40.897020       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:45:40.897060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:45:40.897072       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:45:40.897081       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:45:40.959804       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:45:40.959947       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:40.964487       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:40.964572       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:40.965527       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:45:40.965601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:45:41.065567       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:45:47 embed-certs-055175 kubelet[728]: I1018 09:45:47.735150     728 scope.go:117] "RemoveContainer" containerID="27c525c795f2123be01ce577d6b1f6a8fab55c25719657873e31cd200093572a"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: I1018 09:45:48.741131     728 scope.go:117] "RemoveContainer" containerID="27c525c795f2123be01ce577d6b1f6a8fab55c25719657873e31cd200093572a"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: I1018 09:45:48.741316     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: E1018 09:45:48.741586     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:45:49 embed-certs-055175 kubelet[728]: I1018 09:45:49.745605     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:49 embed-certs-055175 kubelet[728]: E1018 09:45:49.745800     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:45:51 embed-certs-055175 kubelet[728]: I1018 09:45:51.187281     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:45:53 embed-certs-055175 kubelet[728]: I1018 09:45:53.186934     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ddr7" podStartSLOduration=2.351417981 podStartE2EDuration="9.186913519s" podCreationTimestamp="2025-10-18 09:45:44 +0000 UTC" firstStartedPulling="2025-10-18 09:45:44.460197125 +0000 UTC m=+5.866402172" lastFinishedPulling="2025-10-18 09:45:51.295692678 +0000 UTC m=+12.701897710" observedRunningTime="2025-10-18 09:45:51.764238925 +0000 UTC m=+13.170443977" watchObservedRunningTime="2025-10-18 09:45:53.186913519 +0000 UTC m=+14.593118567"
	Oct 18 09:45:55 embed-certs-055175 kubelet[728]: I1018 09:45:55.656424     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:55 embed-certs-055175 kubelet[728]: E1018 09:45:55.656653     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.679626     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.792097     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.792345     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: E1018 09:46:06.792578     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:12 embed-certs-055175 kubelet[728]: I1018 09:46:12.811086     728 scope.go:117] "RemoveContainer" containerID="cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20"
	Oct 18 09:46:15 embed-certs-055175 kubelet[728]: I1018 09:46:15.655714     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:15 embed-certs-055175 kubelet[728]: E1018 09:46:15.655973     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.679258     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.865237     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.865489     728 scope.go:117] "RemoveContainer" containerID="d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: E1018 09:46:30.865676     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: kubelet.service: Consumed 1.830s CPU time.
	
	
	==> kubernetes-dashboard [492f754e3064cb75bcfcd048c637bf1d922ca2b1f7c946df701660dacb55b5b6] <==
	2025/10/18 09:45:51 Using namespace: kubernetes-dashboard
	2025/10/18 09:45:51 Using in-cluster config to connect to apiserver
	2025/10/18 09:45:51 Using secret token for csrf signing
	2025/10/18 09:45:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:45:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:45:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:45:51 Generating JWE encryption key
	2025/10/18 09:45:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:45:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:45:51 Initializing JWE encryption key from synchronized object
	2025/10/18 09:45:51 Creating in-cluster Sidecar client
	2025/10/18 09:45:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:45:51 Serving insecurely on HTTP port: 9090
	2025/10/18 09:46:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:45:51 Starting overwatch
	
	
	==> storage-provisioner [cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20] <==
	I1018 09:45:42.066410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:46:12.069325       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af] <==
	I1018 09:46:12.870071       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:46:12.878271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:46:12.878326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:46:12.880736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:16.336494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:20.597601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:24.196037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:27.283484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.319495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.332477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:46:30.332687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:46:30.332772       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f02da72-07ef-40c7-b357-1999f0a74d4d", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f became leader
	I1018 09:46:30.332860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f!
	W1018 09:46:30.335498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.338369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:46:30.433626       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f!
	W1018 09:46:32.341531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:32.345502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:34.349359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:34.353161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:36.358103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:36.362523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:38.367590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:38.373159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-055175 -n embed-certs-055175: exit status 2 (362.975814ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-055175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-055175
helpers_test.go:243: (dbg) docker inspect embed-certs-055175:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	        "Created": "2025-10-18T09:44:28.71602918Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 391257,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:45:32.513143488Z",
	            "FinishedAt": "2025-10-18T09:45:31.679063648Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/hosts",
	        "LogPath": "/var/lib/docker/containers/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a/7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a-json.log",
	        "Name": "/embed-certs-055175",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-055175:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-055175",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ab18617f15c55144ca52784d872159910c8339260f6cb539a637650c8aa090a",
	                "LowerDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/531bfa693b92ead6b3f8f81dfe6ceee18d64c33ff7b1620bdbea0221492660a0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-055175",
	                "Source": "/var/lib/docker/volumes/embed-certs-055175/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-055175",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-055175",
	                "name.minikube.sigs.k8s.io": "embed-certs-055175",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "225858e59a759609a218e9917712deaa3f1f149ba559732d2116cd45995f2ca0",
	            "SandboxKey": "/var/run/docker/netns/225858e59a75",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33217"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33218"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33221"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33219"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33220"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-055175": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:e1:38:d9:39:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7d2dbeb8dc9f32aa321be9871888fc0b62950b6ca92410878ff116152ea346c2",
	                    "EndpointID": "667a1602f489d243c572ea5e9a80c150cc6dd31df68372768c41606b769130c5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-055175",
	                        "7ab18617f15c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175: exit status 2 (340.160542ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-055175 logs -n 25: (1.380433249s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-399936                                                                                                                                                                                                               │ disable-driver-mounts-399936 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:44 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:44 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p embed-certs-055175 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p embed-certs-055175 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p newest-cni-708733 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p newest-cni-708733 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-942905 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-942905 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ image   │ newest-cni-708733 image list --format=json                                                                                                                                                                                                    │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ pause   │ -p newest-cni-708733 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │                     │
	│ delete  │ -p newest-cni-708733                                                                                                                                                                                                                          │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ delete  │ -p newest-cni-708733                                                                                                                                                                                                                          │ newest-cni-708733            │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:45 UTC │
	│ start   │ -p auto-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:45 UTC │ 18 Oct 25 09:46 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-942905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ start   │ -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ image   │ embed-certs-055175 image list --format=json                                                                                                                                                                                                   │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ pause   │ -p embed-certs-055175 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-055175           │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 pgrep -a kubelet                                                                                                                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-919613    │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ start   │ -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-919613    │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:46:37
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:46:37.903503  407893 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:46:37.903850  407893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:37.903859  407893 out.go:374] Setting ErrFile to fd 2...
	I1018 09:46:37.903866  407893 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:46:37.904138  407893 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:46:37.904707  407893 out.go:368] Setting JSON to false
	I1018 09:46:37.906409  407893 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5342,"bootTime":1760775456,"procs":355,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:46:37.906545  407893 start.go:141] virtualization: kvm guest
	I1018 09:46:37.908526  407893 out.go:179] * [kubernetes-upgrade-919613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:46:37.911955  407893 notify.go:220] Checking for updates...
	I1018 09:46:37.913372  407893 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:46:37.915512  407893 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:46:37.917115  407893 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:46:37.919628  407893 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:46:37.921514  407893 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:46:37.922611  407893 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:46:37.925140  407893 config.go:182] Loaded profile config "kubernetes-upgrade-919613": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:46:37.925926  407893 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:46:37.967338  407893 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:46:37.967444  407893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:46:38.065729  407893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 09:46:38.044281857 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:46:38.065956  407893 docker.go:318] overlay module found
	I1018 09:46:38.068150  407893 out.go:179] * Using the docker driver based on existing profile
	I1018 09:46:38.069227  407893 start.go:305] selected driver: docker
	I1018 09:46:38.069246  407893 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-919613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-919613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:38.069351  407893 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:46:38.070192  407893 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:46:38.160934  407893 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-18 09:46:38.147599001 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:46:38.161583  407893 cni.go:84] Creating CNI manager for ""
	I1018 09:46:38.161656  407893 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 09:46:38.161702  407893 start.go:349] cluster config:
	{Name:kubernetes-upgrade-919613 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-919613 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:46:38.163283  407893 out.go:179] * Starting "kubernetes-upgrade-919613" primary control-plane node in "kubernetes-upgrade-919613" cluster
	I1018 09:46:38.164790  407893 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:46:38.166095  407893 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:46:38.167107  407893 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:46:38.167167  407893 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:46:38.167188  407893 cache.go:58] Caching tarball of preloaded images
	I1018 09:46:38.167288  407893 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:46:38.167304  407893 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:46:38.167420  407893 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/kubernetes-upgrade-919613/config.json ...
	I1018 09:46:38.167714  407893 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:46:38.198470  407893 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:46:38.198494  407893 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:46:38.198518  407893 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:46:38.198557  407893 start.go:360] acquireMachinesLock for kubernetes-upgrade-919613: {Name:mk92109118ac59f1f43ff70ee533a211e2119f34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:46:38.198625  407893 start.go:364] duration metric: took 43.626µs to acquireMachinesLock for "kubernetes-upgrade-919613"
	I1018 09:46:38.198654  407893 start.go:96] Skipping create...Using existing machine configuration
	I1018 09:46:38.198667  407893 fix.go:54] fixHost starting: 
	I1018 09:46:38.199009  407893 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-919613 --format={{.State.Status}}
	I1018 09:46:38.225051  407893 fix.go:112] recreateIfNeeded on kubernetes-upgrade-919613: state=Running err=<nil>
	W1018 09:46:38.225110  407893 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.739813749Z" level=info msg="Started container" PID=1754 containerID=8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper id=4dfe9b46-151d-463e-bf19-4bf841a83e06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c5c582ed677d3f977eb7b8a16c11dd5c0376df50082d8ab46f87d7831029e2
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.793445343Z" level=info msg="Removing container: 5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed" id=5cbf729b-6eeb-4818-a5c8-f86e827a5c22 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:06 embed-certs-055175 crio[569]: time="2025-10-18T09:46:06.803709463Z" level=info msg="Removed container 5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=5cbf729b-6eeb-4818-a5c8-f86e827a5c22 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.811488494Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9bf357b8-5e92-42ed-b626-2ac6d084fe99 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.812474089Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8c3e0980-28e2-46fd-bb26-58cc56678b92 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.814314085Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=dd152ca9-bbaf-4d62-b335-150c428e0dd3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.814578574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819407994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819607854Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9968955a7e6b954be5a30a0fd1a682309cc5a343a80e7b976a730ad96b139a71/merged/etc/passwd: no such file or directory"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.81963995Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9968955a7e6b954be5a30a0fd1a682309cc5a343a80e7b976a730ad96b139a71/merged/etc/group: no such file or directory"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.819918441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.852770092Z" level=info msg="Created container eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af: kube-system/storage-provisioner/storage-provisioner" id=dd152ca9-bbaf-4d62-b335-150c428e0dd3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.853613248Z" level=info msg="Starting container: eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af" id=6e810bc2-c1ac-4b21-a12f-c8972e1a87cd name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:12 embed-certs-055175 crio[569]: time="2025-10-18T09:46:12.856187215Z" level=info msg="Started container" PID=1768 containerID=eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af description=kube-system/storage-provisioner/storage-provisioner id=6e810bc2-c1ac-4b21-a12f-c8972e1a87cd name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a8eaca320a2f3cf0b8a43acc2f6bac8b5e7ebcc1f500cc21bab0220d5907455
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.679761584Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=033e421a-2f1a-4f2c-826f-4258a9b4728e name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.68078595Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0c9c81d5-3c67-4d3a-9d6d-03f29bbf394b name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.682013153Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=8bee58b6-67af-4309-a895-dcdf380dbd3b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.682279481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.688541666Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.689109429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.718541431Z" level=info msg="Created container d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=8bee58b6-67af-4309-a895-dcdf380dbd3b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.719234065Z" level=info msg="Starting container: d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c" id=0fd36466-333f-4de8-8567-2df5b4d243d4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.72127391Z" level=info msg="Started container" PID=1804 containerID=d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper id=0fd36466-333f-4de8-8567-2df5b4d243d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=15c5c582ed677d3f977eb7b8a16c11dd5c0376df50082d8ab46f87d7831029e2
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.866545412Z" level=info msg="Removing container: 8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062" id=e20ed6a9-98a9-4c56-918b-970de2c95b1b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:30 embed-certs-055175 crio[569]: time="2025-10-18T09:46:30.87638922Z" level=info msg="Removed container 8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729/dashboard-metrics-scraper" id=e20ed6a9-98a9-4c56-918b-970de2c95b1b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d47777b68d3c4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   15c5c582ed677       dashboard-metrics-scraper-6ffb444bf9-l9729   kubernetes-dashboard
	eb19b9973edd2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   5a8eaca320a2f       storage-provisioner                          kube-system
	492f754e3064c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   40376facc707b       kubernetes-dashboard-855c9754f9-5ddr7        kubernetes-dashboard
	deb20ecd58a6f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   5836c3ce58123       busybox                                      default
	7532c8b9596c0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   7268dcec2d1fb       coredns-66bc5c9577-ksdf9                     kube-system
	fca12026bf0b1       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   3e8d4d152a834       kindnet-tntfx                                kube-system
	cfb7f8b3c954a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   5a8eaca320a2f       storage-provisioner                          kube-system
	18b9b557a1a00       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   480841c7ca9a0       kube-proxy-9n98q                             kube-system
	82544c7a6f005       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   3d26a36ca5806       kube-scheduler-embed-certs-055175            kube-system
	d8a28c141ac16       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   b85b22c0c63ee       etcd-embed-certs-055175                      kube-system
	f427b03ed9ce5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   1d59202d0504e       kube-apiserver-embed-certs-055175            kube-system
	0aa95bb2edd15       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   9af16a1156df6       kube-controller-manager-embed-certs-055175   kube-system
	
	
	==> coredns [7532c8b9596c037e46d007cefb401457054eec5cbee4a52ea325b5f3828bb3f9] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45054 - 63898 "HINFO IN 5170228139375379588.4831894331947021424. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.05520709s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-055175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-055175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=embed-certs-055175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_44_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:44:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-055175
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:46:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:44:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:46:11 +0000   Sat, 18 Oct 2025 09:45:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-055175
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                a753bc03-5449-4387-b526-2cbb885beb79
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-ksdf9                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-embed-certs-055175                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-tntfx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-embed-certs-055175             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-embed-certs-055175    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-9n98q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-embed-certs-055175             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l9729    0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5ddr7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node embed-certs-055175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node embed-certs-055175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node embed-certs-055175 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node embed-certs-055175 event: Registered Node embed-certs-055175 in Controller
	  Normal  NodeReady                100s               kubelet          Node embed-certs-055175 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node embed-certs-055175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node embed-certs-055175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node embed-certs-055175 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node embed-certs-055175 event: Registered Node embed-certs-055175 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [d8a28c141ac160edf272ee9ddf9b12c1548c335130b27e90935ec06e6a60642f] <==
	{"level":"warn","ts":"2025-10-18T09:45:40.195652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.202210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.210993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.218708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.227447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.235630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.241977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.250635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.257121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.263590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.270900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.277237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.284881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.292225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.298910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.306396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.313564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.326027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.329748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.345590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:45:40.405673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:01.576704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.2517ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356040983161112 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" mod_revision:577 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-kxikw4ohcsavdzb77m6qwlog7y\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-18T09:46:01.576818Z","caller":"traceutil/trace.go:172","msg":"trace[1910276683] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"178.962172ms","start":"2025-10-18T09:46:01.397839Z","end":"2025-10-18T09:46:01.576801Z","steps":["trace[1910276683] 'process raft request'  (duration: 53.050253ms)","trace[1910276683] 'compare'  (duration: 125.121379ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-18T09:46:01.590784Z","caller":"traceutil/trace.go:172","msg":"trace[2078134268] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"167.678962ms","start":"2025-10-18T09:46:01.423086Z","end":"2025-10-18T09:46:01.590765Z","steps":["trace[2078134268] 'process raft request'  (duration: 167.561806ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-18T09:46:01.717527Z","caller":"traceutil/trace.go:172","msg":"trace[499519936] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"257.47078ms","start":"2025-10-18T09:46:01.460025Z","end":"2025-10-18T09:46:01.717496Z","steps":["trace[499519936] 'process raft request'  (duration: 246.427258ms)","trace[499519936] 'compare'  (duration: 10.794075ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:46:40 up  1:29,  0 user,  load average: 4.82, 3.24, 2.07
	Linux embed-certs-055175 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fca12026bf0b1ba5900afb94e683550d1e47af8a207f77fcb266172b3322547a] <==
	I1018 09:45:42.289015       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:45:42.289297       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1018 09:45:42.289447       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:45:42.289465       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:45:42.289478       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:45:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:45:42.492436       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:45:42.492478       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:45:42.492508       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:45:42.492641       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:45:42.988958       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:45:42.988996       1 metrics.go:72] Registering metrics
	I1018 09:45:42.989118       1 controller.go:711] "Syncing nftables rules"
	I1018 09:45:52.492993       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:45:52.493062       1 main.go:301] handling current node
	I1018 09:46:02.492899       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:02.492968       1 main.go:301] handling current node
	I1018 09:46:12.493328       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:12.493368       1 main.go:301] handling current node
	I1018 09:46:22.492894       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:22.492970       1 main.go:301] handling current node
	I1018 09:46:32.492984       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1018 09:46:32.493021       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f427b03ed9ce5db5e74d4069d94ff8948e7da816ad588e3c12fbdd22293cab0d] <==
	I1018 09:45:40.978941       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:45:40.978951       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:45:40.978958       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:45:40.978964       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:45:40.974772       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1018 09:45:40.974362       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1018 09:45:40.979924       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:45:40.981071       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:45:40.985840       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:45:40.989517       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:45:40.989547       1 policy_source.go:240] refreshing policies
	I1018 09:45:40.994399       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1018 09:45:41.004343       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:45:41.077950       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:45:41.319030       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:45:41.347778       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:45:41.369242       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:45:41.380490       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:45:41.387761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:45:41.422930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.38.12"}
	I1018 09:45:41.442056       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.62.121"}
	I1018 09:45:41.877915       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:45:43.651995       1 controller.go:667] quota admission added evaluator for: endpoints
	I1018 09:45:44.045390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:45:44.197558       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0aa95bb2edd15a27792b1a5bd689c87bd9d233036c37d823aff945a5fee1dc2d] <==
	I1018 09:45:43.640624       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:45:43.640918       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-055175"
	I1018 09:45:43.640986       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:45:43.641055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1018 09:45:43.640728       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1018 09:45:43.641558       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1018 09:45:43.641707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1018 09:45:43.642249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1018 09:45:43.642387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:43.642534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:45:43.642570       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:45:43.643286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:45:43.644860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:45:43.645389       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:45:43.647358       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1018 09:45:43.650129       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1018 09:45:43.651302       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:43.651413       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1018 09:45:43.653682       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:45:43.655685       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1018 09:45:43.658009       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1018 09:45:43.660589       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1018 09:45:43.668879       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1018 09:45:43.681042       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:45:43.691029       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [18b9b557a1a00e9e1345cbaf906acf8f76759deaa3ffbb5c5956d703f09a134d] <==
	I1018 09:45:42.101039       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:45:42.167571       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:45:42.268714       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:45:42.268745       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1018 09:45:42.268836       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:45:42.291866       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:45:42.291941       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:45:42.298046       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:45:42.298480       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:45:42.298516       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:42.302465       1 config.go:200] "Starting service config controller"
	I1018 09:45:42.302487       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:45:42.302501       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:45:42.302507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:45:42.302527       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:45:42.302532       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:45:42.302785       1 config.go:309] "Starting node config controller"
	I1018 09:45:42.302798       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:45:42.403250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:45:42.403261       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1018 09:45:42.403300       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:45:42.403285       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [82544c7a6f005e8210b717c0d6b26b29c0f08fbb6fdae3002ce9107b8c924a75] <==
	I1018 09:45:39.797865       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:45:40.897020       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:45:40.897060       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1018 09:45:40.897072       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:45:40.897081       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:45:40.959804       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:45:40.959947       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:45:40.964487       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:40.964572       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:45:40.965527       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:45:40.965601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:45:41.065567       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:45:47 embed-certs-055175 kubelet[728]: I1018 09:45:47.735150     728 scope.go:117] "RemoveContainer" containerID="27c525c795f2123be01ce577d6b1f6a8fab55c25719657873e31cd200093572a"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: I1018 09:45:48.741131     728 scope.go:117] "RemoveContainer" containerID="27c525c795f2123be01ce577d6b1f6a8fab55c25719657873e31cd200093572a"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: I1018 09:45:48.741316     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:48 embed-certs-055175 kubelet[728]: E1018 09:45:48.741586     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:45:49 embed-certs-055175 kubelet[728]: I1018 09:45:49.745605     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:49 embed-certs-055175 kubelet[728]: E1018 09:45:49.745800     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:45:51 embed-certs-055175 kubelet[728]: I1018 09:45:51.187281     728 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:45:53 embed-certs-055175 kubelet[728]: I1018 09:45:53.186934     728 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5ddr7" podStartSLOduration=2.351417981 podStartE2EDuration="9.186913519s" podCreationTimestamp="2025-10-18 09:45:44 +0000 UTC" firstStartedPulling="2025-10-18 09:45:44.460197125 +0000 UTC m=+5.866402172" lastFinishedPulling="2025-10-18 09:45:51.295692678 +0000 UTC m=+12.701897710" observedRunningTime="2025-10-18 09:45:51.764238925 +0000 UTC m=+13.170443977" watchObservedRunningTime="2025-10-18 09:45:53.186913519 +0000 UTC m=+14.593118567"
	Oct 18 09:45:55 embed-certs-055175 kubelet[728]: I1018 09:45:55.656424     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:45:55 embed-certs-055175 kubelet[728]: E1018 09:45:55.656653     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.679626     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.792097     728 scope.go:117] "RemoveContainer" containerID="5c29590fffc97e54ce536272251223b4ccf35091afdddc9849d7773b0772abed"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: I1018 09:46:06.792345     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:06 embed-certs-055175 kubelet[728]: E1018 09:46:06.792578     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:12 embed-certs-055175 kubelet[728]: I1018 09:46:12.811086     728 scope.go:117] "RemoveContainer" containerID="cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20"
	Oct 18 09:46:15 embed-certs-055175 kubelet[728]: I1018 09:46:15.655714     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:15 embed-certs-055175 kubelet[728]: E1018 09:46:15.655973     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.679258     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.865237     728 scope.go:117] "RemoveContainer" containerID="8a57209e7385ed384a05fdc6e608d391c28485e1e585900dcef5d84f29ead062"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: I1018 09:46:30.865489     728 scope.go:117] "RemoveContainer" containerID="d47777b68d3c4c1e6a384ff338c901429116f30e9a77d8534cdbdade36f15a3c"
	Oct 18 09:46:30 embed-certs-055175 kubelet[728]: E1018 09:46:30.865676     728 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l9729_kubernetes-dashboard(46196800-7b0f-4c69-9e40-79dee66580a9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l9729" podUID="46196800-7b0f-4c69-9e40-79dee66580a9"
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:46:35 embed-certs-055175 systemd[1]: kubelet.service: Consumed 1.830s CPU time.
	
	
	==> kubernetes-dashboard [492f754e3064cb75bcfcd048c637bf1d922ca2b1f7c946df701660dacb55b5b6] <==
	2025/10/18 09:45:51 Starting overwatch
	2025/10/18 09:45:51 Using namespace: kubernetes-dashboard
	2025/10/18 09:45:51 Using in-cluster config to connect to apiserver
	2025/10/18 09:45:51 Using secret token for csrf signing
	2025/10/18 09:45:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:45:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:45:51 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:45:51 Generating JWE encryption key
	2025/10/18 09:45:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:45:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:45:51 Initializing JWE encryption key from synchronized object
	2025/10/18 09:45:51 Creating in-cluster Sidecar client
	2025/10/18 09:45:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:45:51 Serving insecurely on HTTP port: 9090
	2025/10/18 09:46:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [cfb7f8b3c954a45f3592ad652b5d22760faec41160d64c4ca5ea64b499628f20] <==
	I1018 09:45:42.066410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:46:12.069325       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [eb19b9973edd2e5885c0817788667f192b9bccac8eb33da85d7ba2b3696335af] <==
	I1018 09:46:12.878271       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:46:12.878326       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:46:12.880736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:16.336494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:20.597601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:24.196037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:27.283484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.319495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.332477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:46:30.332687       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:46:30.332772       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2f02da72-07ef-40c7-b357-1999f0a74d4d", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f became leader
	I1018 09:46:30.332860       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f!
	W1018 09:46:30.335498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:30.338369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:46:30.433626       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-055175_15be879a-fbaa-444a-9bfc-a30ab4247d8f!
	W1018 09:46:32.341531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:32.345502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:34.349359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:34.353161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:36.358103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:36.362523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:38.367590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:38.373159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:40.377118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:40.382542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
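
The kube-scheduler warnings at the top of this dump spell out their own fix. A minimal sketch of that rolebinding against this profile's context (the binding name is arbitrary, and binding the system:kube-scheduler user instead of the template's YOUR_NS:YOUR_SA service-account placeholder is an assumption drawn from the "forbidden" message above):

	# grant the scheduler read access to the extension-apiserver-authentication configmap
	kubectl --context embed-certs-055175 create rolebinding scheduler-auth-reader \
	  --namespace=kube-system \
	  --role=extension-apiserver-authentication-reader \
	  --user=system:kube-scheduler
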
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-055175 -n embed-certs-055175
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-055175 -n embed-certs-055175: exit status 2 (338.353042ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-055175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-942905 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-942905 --alsologtostderr -v=1: exit status 80 (2.501420642s)

-- stdout --
	* Pausing node default-k8s-diff-port-942905 ... 
	
	

-- /stdout --
** stderr ** 
	I1018 09:47:02.676589  419176 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:47:02.676920  419176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:02.676931  419176 out.go:374] Setting ErrFile to fd 2...
	I1018 09:47:02.676935  419176 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:02.677193  419176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:47:02.677508  419176 out.go:368] Setting JSON to false
	I1018 09:47:02.677535  419176 mustload.go:65] Loading cluster: default-k8s-diff-port-942905
	I1018 09:47:02.678056  419176 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:47:02.678670  419176 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-942905 --format={{.State.Status}}
	I1018 09:47:02.700641  419176 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:47:02.701028  419176 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:47:02.806407  419176 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-18 09:47:02.78994488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:47:02.808042  419176 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-942905 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1018 09:47:02.810012  419176 out.go:179] * Pausing node default-k8s-diff-port-942905 ... 
	I1018 09:47:02.811099  419176 host.go:66] Checking if "default-k8s-diff-port-942905" exists ...
	I1018 09:47:02.811463  419176 ssh_runner.go:195] Run: systemctl --version
	I1018 09:47:02.811554  419176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-942905
	I1018 09:47:02.839478  419176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33234 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/default-k8s-diff-port-942905/id_rsa Username:docker}
	I1018 09:47:02.951135  419176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:47:02.979961  419176 pause.go:52] kubelet running: true
	I1018 09:47:02.980039  419176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:47:03.225909  419176 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:47:03.226045  419176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:47:03.303020  419176 cri.go:89] found id: "e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d"
	I1018 09:47:03.303119  419176 cri.go:89] found id: "21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804"
	I1018 09:47:03.303131  419176 cri.go:89] found id: "8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0"
	I1018 09:47:03.303136  419176 cri.go:89] found id: "8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	I1018 09:47:03.303142  419176 cri.go:89] found id: "02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23"
	I1018 09:47:03.303149  419176 cri.go:89] found id: "53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624"
	I1018 09:47:03.303155  419176 cri.go:89] found id: "c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7"
	I1018 09:47:03.303163  419176 cri.go:89] found id: "064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2"
	I1018 09:47:03.303167  419176 cri.go:89] found id: "776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152"
	I1018 09:47:03.303179  419176 cri.go:89] found id: "9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	I1018 09:47:03.303183  419176 cri.go:89] found id: "82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e"
	I1018 09:47:03.303188  419176 cri.go:89] found id: ""
	I1018 09:47:03.303241  419176 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:47:03.315302  419176 retry.go:31] will retry after 144.14686ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:47:03Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:47:03.459674  419176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:47:03.472576  419176 pause.go:52] kubelet running: false
	I1018 09:47:03.472645  419176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:47:03.618650  419176 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:47:03.618758  419176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:47:03.699221  419176 cri.go:89] found id: "e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d"
	I1018 09:47:03.699246  419176 cri.go:89] found id: "21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804"
	I1018 09:47:03.699252  419176 cri.go:89] found id: "8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0"
	I1018 09:47:03.699256  419176 cri.go:89] found id: "8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	I1018 09:47:03.699261  419176 cri.go:89] found id: "02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23"
	I1018 09:47:03.699265  419176 cri.go:89] found id: "53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624"
	I1018 09:47:03.699268  419176 cri.go:89] found id: "c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7"
	I1018 09:47:03.699272  419176 cri.go:89] found id: "064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2"
	I1018 09:47:03.699276  419176 cri.go:89] found id: "776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152"
	I1018 09:47:03.699292  419176 cri.go:89] found id: "9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	I1018 09:47:03.699296  419176 cri.go:89] found id: "82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e"
	I1018 09:47:03.699300  419176 cri.go:89] found id: ""
	I1018 09:47:03.699364  419176 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:47:03.712416  419176 retry.go:31] will retry after 304.590657ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:47:03Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:47:04.017979  419176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:47:04.033621  419176 pause.go:52] kubelet running: false
	I1018 09:47:04.033684  419176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:47:04.218045  419176 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:47:04.218146  419176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:47:04.300915  419176 cri.go:89] found id: "e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d"
	I1018 09:47:04.300936  419176 cri.go:89] found id: "21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804"
	I1018 09:47:04.300939  419176 cri.go:89] found id: "8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0"
	I1018 09:47:04.300943  419176 cri.go:89] found id: "8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	I1018 09:47:04.300945  419176 cri.go:89] found id: "02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23"
	I1018 09:47:04.300948  419176 cri.go:89] found id: "53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624"
	I1018 09:47:04.300951  419176 cri.go:89] found id: "c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7"
	I1018 09:47:04.300953  419176 cri.go:89] found id: "064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2"
	I1018 09:47:04.300956  419176 cri.go:89] found id: "776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152"
	I1018 09:47:04.300971  419176 cri.go:89] found id: "9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	I1018 09:47:04.300973  419176 cri.go:89] found id: "82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e"
	I1018 09:47:04.300976  419176 cri.go:89] found id: ""
	I1018 09:47:04.301012  419176 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:47:04.315206  419176 retry.go:31] will retry after 472.2145ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:47:04Z" level=error msg="open /run/runc: no such file or directory"
	I1018 09:47:04.787891  419176 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:47:04.808708  419176 pause.go:52] kubelet running: false
	I1018 09:47:04.808769  419176 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1018 09:47:04.990433  419176 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1018 09:47:04.990529  419176 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1018 09:47:05.087547  419176 cri.go:89] found id: "e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d"
	I1018 09:47:05.087572  419176 cri.go:89] found id: "21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804"
	I1018 09:47:05.087578  419176 cri.go:89] found id: "8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0"
	I1018 09:47:05.087582  419176 cri.go:89] found id: "8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	I1018 09:47:05.087586  419176 cri.go:89] found id: "02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23"
	I1018 09:47:05.087591  419176 cri.go:89] found id: "53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624"
	I1018 09:47:05.087596  419176 cri.go:89] found id: "c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7"
	I1018 09:47:05.087600  419176 cri.go:89] found id: "064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2"
	I1018 09:47:05.087604  419176 cri.go:89] found id: "776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152"
	I1018 09:47:05.087611  419176 cri.go:89] found id: "9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	I1018 09:47:05.087615  419176 cri.go:89] found id: "82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e"
	I1018 09:47:05.087619  419176 cri.go:89] found id: ""
	I1018 09:47:05.087662  419176 ssh_runner.go:195] Run: sudo runc list -f json
	I1018 09:47:05.106749  419176 out.go:203] 
	W1018 09:47:05.108243  419176 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:47:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:47:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1018 09:47:05.108260  419176 out.go:285] * 
	* 
	W1018 09:47:05.114809  419176 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1018 09:47:05.117248  419176 out.go:203] 

** /stderr **
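
The trace above shows the pause flow step by step: confirm the kubelet is active, disable it, enumerate CRI containers through crictl, then call `sudo runc list -f json`, which fails three retries in a row because /run/runc does not exist on the node; that final error is what surfaces as GUEST_PAUSE. A hedged way to reproduce the failing step by hand (the profile name comes from the test; that crio's config dump reveals the state root its runtime actually uses is an assumption):

	# the exact command the pause flow retries, run inside the node
	minikube ssh -p default-k8s-diff-port-942905 -- sudo runc list -f json
	# inspect the CRI-O runtime configuration for the state root it actually uses
	minikube ssh -p default-k8s-diff-port-942905 -- sudo crio config
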
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-942905 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-942905
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-942905:

-- stdout --
	[
	    {
	        "Id": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	        "Created": "2025-10-18T09:44:58.37670581Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 401045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:46:02.830725868Z",
	            "FinishedAt": "2025-10-18T09:46:01.761013496Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hosts",
	        "LogPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01-json.log",
	        "Name": "/default-k8s-diff-port-942905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-942905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-942905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	                "LowerDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-942905",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-942905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-942905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86e24aafe1d6c9cc8b12b47df59ca428d52dcd84ec17bbbdb08085051fb9d0e6",
	            "SandboxKey": "/var/run/docker/netns/86e24aafe1d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33236"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33237"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-942905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:48:82:c1:a6:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0fd78e2b1cc4903dcfba13e124358f0be34e6a060a2c5a3353848c2f3b6de6b8",
	                    "EndpointID": "1a98afc09aeedba6791a86b7ea52dd99f7e5297d973189cd9219ac132719692b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-942905",
	                        "b1c05e040b9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
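
The NetworkSettings.Ports map in this inspect output is what the pause trace parsed at 09:47:02 to reach the node over SSH. The same Go template, in directly runnable shell form (the logged version only adds an extra layer of quoting), extracts the host port bound to 22/tcp, 33234 here:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-942905
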
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905: exit status 2 (450.947078ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25: (1.610159175s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-345705 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat docker --no-pager                                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo docker system info                                                                                                                             │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cri-dockerd --version                                                                                                                          │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo containerd config dump                                                                                                                         │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ image   │ default-k8s-diff-port-942905 image list --format=json                                                                                                              │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo crio config                                                                                                                                    │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ pause   │ -p default-k8s-diff-port-942905 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │                     │
	│ delete  │ -p auto-345705                                                                                                                                                     │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ start   │ -p custom-flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-345705        │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:47:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:47:05.601790  420154 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:47:05.603796  420154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:05.603842  420154 out.go:374] Setting ErrFile to fd 2...
	I1018 09:47:05.603852  420154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:05.604944  420154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:47:05.605695  420154 out.go:368] Setting JSON to false
	I1018 09:47:05.607465  420154 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5370,"bootTime":1760775456,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:47:05.607604  420154 start.go:141] virtualization: kvm guest
	I1018 09:47:05.608906  420154 out.go:179] * [custom-flannel-345705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:47:05.610816  420154 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:47:05.610860  420154 notify.go:220] Checking for updates...
	I1018 09:47:05.612878  420154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:47:05.614652  420154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:47:05.615923  420154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:47:05.617861  420154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:47:05.618986  420154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	
	
	==> CRI-O <==
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.794580153Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.798513347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.798532356Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.985793886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58995dcf-942d-478b-abcc-090ba68fd32d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.989222164Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68c6f5cb-15a5-45c2-8bef-a09a8121a031 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.994190493Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=54f8b655-9324-4850-80b5-73a9b251b6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.996893975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.007081377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.007751298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.041009847Z" level=info msg="Created container 9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=54f8b655-9324-4850-80b5-73a9b251b6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.042330188Z" level=info msg="Starting container: 9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520" id=f96c4cd5-4257-4bb9-b901-4f9c98c88a02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.045267807Z" level=info msg="Started container" PID=1744 containerID=9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper id=f96c4cd5-4257-4bb9-b901-4f9c98c88a02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ebf9016dfa181b96cea3fec4ee533f637d822f6f7873121719e37609b8e65b3
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.095867754Z" level=info msg="Removing container: 29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848" id=11f82728-8e22-4c20-935a-aa0495348a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.108694187Z" level=info msg="Removed container 29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=11f82728-8e22-4c20-935a-aa0495348a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.118447735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=80f17300-77fb-4ddb-b10a-c971af586a0c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.121151807Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f3f26ad9-d7a8-4374-b2d9-42a09aaad502 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.122278147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=efba847b-e108-4cd3-87d3-346b8bd690ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.122553032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.128715334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.128888796Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7b507c4018a4d93b47383bb20628343fbddab0565741172ccd95e92f2e272b1d/merged/etc/passwd: no such file or directory"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.1289235Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7b507c4018a4d93b47383bb20628343fbddab0565741172ccd95e92f2e272b1d/merged/etc/group: no such file or directory"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.129151528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.159092461Z" level=info msg="Created container e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d: kube-system/storage-provisioner/storage-provisioner" id=efba847b-e108-4cd3-87d3-346b8bd690ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.159983131Z" level=info msg="Starting container: e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d" id=0b85db91-453e-4951-8316-1ebc2f3754d5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.162278782Z" level=info msg="Started container" PID=1758 containerID=e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d description=kube-system/storage-provisioner/storage-provisioner id=0b85db91-453e-4951-8316-1ebc2f3754d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84a49c8f455d2bde09a51fc11eed92aad65f9c6ecb2a7c46110f9635e06fff7e
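
The 'Failed to open /etc/passwd' and '/etc/group' warnings above come from CRI-O resolving the container user while creating storage-provisioner; scratch-built images ship neither file, so the warnings are benign. To re-inspect the runtime on the node, something like the following should work (illustrative only; the profile name is taken from this log):

	$ minikube -p default-k8s-diff-port-942905 ssh -- sudo crictl ps -a
	$ minikube -p default-k8s-diff-port-942905 ssh -- sudo journalctl -u crio -n 50 --no-pager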
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e8f6f2f0c6908       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   84a49c8f455d2       storage-provisioner                                    kube-system
	9848de10f90a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           29 seconds ago      Exited              dashboard-metrics-scraper   2                   5ebf9016dfa18       dashboard-metrics-scraper-6ffb444bf9-9jl2v             kubernetes-dashboard
	82abe805433de       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   c50ebf440566a       kubernetes-dashboard-855c9754f9-4zp6s                  kubernetes-dashboard
	21802871fa133       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   4db80367206a4       coredns-66bc5c9577-g6bf9                               kube-system
	2c950dedcdc79       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   a3df39d8aade6       busybox                                                default
	8b084e558fd84       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   30f198d1a183c       kindnet-xtmcm                                          kube-system
	8a0116addb512       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   84a49c8f455d2       storage-provisioner                                    kube-system
	02d960c0e6124       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   2f96c17406d5d       kube-proxy-x9fjs                                       kube-system
	53c162813a56d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   1419a113a6730       kube-apiserver-default-k8s-diff-port-942905            kube-system
	c1d5522dfa9c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   04064a1aa7e8a       kube-scheduler-default-k8s-diff-port-942905            kube-system
	064212d5e2e85       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   ca2b06a4a3520       etcd-default-k8s-diff-port-942905                      kube-system
	776062d447e41       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   d0586d4908d87       kube-controller-manager-default-k8s-diff-port-942905   kube-system
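
Two rows stand out here: dashboard-metrics-scraper is Exited on ATTEMPT 2 (it is crash-looping; see the kubelet log below), and storage-provisioner is on ATTEMPT 1 after its first instance exited (both of its logs appear further down). Illustrative cross-checks, assuming the kubeconfig context minikube created for this profile:

	$ kubectl --context default-k8s-diff-port-942905 get pods -A
	$ minikube -p default-k8s-diff-port-942905 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper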
	
	
	==> coredns [21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36932 - 62992 "HINFO IN 318217287814050399.5282288501228099778. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024684964s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
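
The three 'dial tcp 10.96.0.1:443: i/o timeout' errors are CoreDNS failing to list Services, EndpointSlices, and Namespaces through the kubernetes Service VIP, which suggests the pod came up before service routing was reprogrammed after the restart; the storage-provisioner log below shows the same symptom in the same window. Illustrative follow-ups:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system logs deploy/coredns --tail=20
	$ kubectl --context default-k8s-diff-port-942905 get svc kubernetes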
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-942905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-942905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-942905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-942905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:46:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-942905
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2840e9d8-1f17-40a1-ae4d-ed361a5c39b0
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-g6bf9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-942905                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-xtmcm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-942905             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-942905    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-x9fjs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-942905             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9jl2v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4zp6s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           109s                 node-controller  Node default-k8s-diff-port-942905 event: Registered Node default-k8s-diff-port-942905 in Controller
	  Normal  NodeReady                97s                  kubelet          Node default-k8s-diff-port-942905 status is now: NodeReady
	  Normal  Starting                 58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 58s)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-942905 event: Registered Node default-k8s-diff-port-942905 in Controller
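
The 850m CPU request total above is just the sum of the listed pod requests: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler), about 10% of the 8-CPU node. One way to re-extract just that summary:

	$ kubectl --context default-k8s-diff-port-942905 describe node default-k8s-diff-port-942905 | sed -n '/Allocated resources:/,/Events:/p'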
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
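
These 'martian source 10.244.0.21 from 127.0.0.1' entries are timestamped 09:01-09:02, well before this cluster was created (09:44), so they are almost certainly residue from earlier tests on this shared CI host; with the docker driver the node sees the host kernel's ring buffer. A quick count, as a sketch:

	$ minikube -p default-k8s-diff-port-942905 ssh -- sudo dmesg | grep -c martian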
	
	
	==> etcd [064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2] <==
	{"level":"warn","ts":"2025-10-18T09:46:10.981028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:10.992927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.000776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.007872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.015355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.023278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.030617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.036616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.043455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.050926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.057710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.064311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.070746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.078218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.085493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.093532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.100420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.107688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.114941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.121955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.129656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.144856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.153678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.162938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.225964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
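
The burst of 'rejected connection ... EOF' warnings at 09:46:10-09:46:11 is etcd logging local client connections that closed before completing a TLS handshake; this pattern is commonly produced by kube-apiserver probing its etcd endpoint while starting up, and it stops once the apiserver finishes syncing (09:46:11 in its log below). If it persisted, a first check could be:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system logs etcd-default-k8s-diff-port-942905 --tail=20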
	
	
	==> kernel <==
	 09:47:06 up  1:29,  0 user,  load average: 6.27, 3.69, 2.25
	Linux default-k8s-diff-port-942905 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0] <==
	I1018 09:46:12.481875       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:46:12.482128       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:46:12.482307       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:46:12.482325       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:46:12.482352       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:46:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:46:12.777280       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:46:12.777346       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:46:12.777357       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:46:12.777491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:46:13.178148       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:46:13.178171       1 metrics.go:72] Registering metrics
	I1018 09:46:13.178238       1 controller.go:711] "Syncing nftables rules"
	I1018 09:46:22.776928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:22.777028       1 main.go:301] handling current node
	I1018 09:46:32.781892       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:32.781922       1 main.go:301] handling current node
	I1018 09:46:42.776491       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:42.776533       1 main.go:301] handling current node
	I1018 09:46:52.778912       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:52.778960       1 main.go:301] handling current node
	I1018 09:47:02.785928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:47:02.785970       1 main.go:301] handling current node
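
kindnet is healthy: after the one-time 'nri plugin exited' notice (no NRI socket on the node, which is harmless), it syncs its caches and settles into the expected ten-second node-handling loop. A spot check using the pod name from the table above:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system logs kindnet-xtmcm --tail=10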
	
	
	==> kube-apiserver [53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624] <==
	I1018 09:46:11.747462       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:46:11.747580       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:46:11.750124       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:46:11.750477       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:46:11.750487       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:46:11.750493       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:46:11.750499       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:46:11.758079       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:46:11.760352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:46:11.765961       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:46:11.775837       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:46:11.775939       1 policy_source.go:240] refreshing policies
	I1018 09:46:11.790854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:46:11.816867       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:46:12.076187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:46:12.104676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:46:12.106194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:46:12.132069       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:46:12.138966       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:46:12.176539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.245.25"}
	I1018 09:46:12.186070       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.153.83"}
	I1018 09:46:12.648617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:46:14.681392       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:46:14.977939       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:46:15.029585       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152] <==
	I1018 09:46:14.373997       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:46:14.374011       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:46:14.374424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:46:14.374542       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:46:14.374669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:46:14.375101       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:46:14.375104       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:46:14.375255       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:46:14.375385       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-942905"
	I1018 09:46:14.375437       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:46:14.375641       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:46:14.375688       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:46:14.375838       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:46:14.375907       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:46:14.375961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:46:14.381334       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:46:14.381464       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:46:14.381507       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:46:14.381515       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:46:14.381521       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:46:14.381963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:46:14.384051       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:46:14.390332       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:46:14.394688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:46:14.442008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23] <==
	I1018 09:46:12.387451       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:46:12.450562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:46:12.551185       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:46:12.551235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:46:12.551332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:46:12.570770       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:46:12.570817       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:46:12.575923       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:46:12.576284       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:46:12.576301       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:46:12.578181       1 config.go:309] "Starting node config controller"
	I1018 09:46:12.578211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:46:12.578220       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:46:12.578234       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:46:12.578251       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:46:12.578274       1 config.go:200] "Starting service config controller"
	I1018 09:46:12.578299       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:46:12.578512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:46:12.578888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:46:12.678699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:46:12.678711       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:46:12.679876       1 shared_informer.go:356] "Caches are synced" controller="service config"
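
kube-proxy's only non-info line is the 'nodePortAddresses is unset' configuration hint, which quotes its own remedy (--nodeport-addresses primary); everything else is its informers syncing within about a second. In this kubeadm-based cluster the setting would live in the kube-proxy ConfigMap, so a check might look like:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses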
	
	
	==> kube-scheduler [c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7] <==
	I1018 09:46:10.406425       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:46:11.696804       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:46:11.696852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1018 09:46:11.696866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:46:11.696878       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:46:11.732710       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:46:11.732742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:46:11.735752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:46:11.735806       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:46:11.736554       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:46:11.736639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:46:11.836203       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
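
The scheduler's startup warnings look like a race with RBAC reconciliation: the extension-apiserver-authentication roles were not yet visible, so it continued without that authentication configuration (hence the 'anonymous' caveat), and its client-CA informer synced a tenth of a second later. Whether the role exists now can be checked with:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system get role extension-apiserver-authentication-reader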
	
	
	==> kubelet <==
	Oct 18 09:46:14 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:14.923394     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92964e9c-974b-45c0-99fd-c175df299295-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4zp6s\" (UID: \"92964e9c-974b-45c0-99fd-c175df299295\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s"
	Oct 18 09:46:14 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:14.923416     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzff\" (UniqueName: \"kubernetes.io/projected/92964e9c-974b-45c0-99fd-c175df299295-kube-api-access-jwzff\") pod \"kubernetes-dashboard-855c9754f9-4zp6s\" (UID: \"92964e9c-974b-45c0-99fd-c175df299295\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s"
	Oct 18 09:46:18 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:18.924881     720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:46:19 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:19.043268     720 scope.go:117] "RemoveContainer" containerID="f08e9a2a19e8fab94f56d13bf8dbe111641f4cee79c103c0ef764cf46b4b3dca"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:20.048007     720 scope.go:117] "RemoveContainer" containerID="f08e9a2a19e8fab94f56d13bf8dbe111641f4cee79c103c0ef764cf46b4b3dca"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:20.048379     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:20.048576     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:21 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:21.052868     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:21 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:21.053052     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:23.068874     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s" podStartSLOduration=2.112345023 podStartE2EDuration="9.068850261s" podCreationTimestamp="2025-10-18 09:46:14 +0000 UTC" firstStartedPulling="2025-10-18 09:46:15.194973047 +0000 UTC m=+6.308701008" lastFinishedPulling="2025-10-18 09:46:22.151478272 +0000 UTC m=+13.265206246" observedRunningTime="2025-10-18 09:46:23.068538629 +0000 UTC m=+14.182266606" watchObservedRunningTime="2025-10-18 09:46:23.068850261 +0000 UTC m=+14.182578230"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:23.537772     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:23.538030     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:36 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:36.985279     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:37.093499     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:37.094087     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:37.095236     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:43.117858     720 scope.go:117] "RemoveContainer" containerID="8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:43.538185     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:43.538375     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:54 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:54.985076     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:54 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:54.985708     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
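
This is the clearest view of the dashboard-metrics-scraper crash loop: the CrashLoopBackOff delay grows from 10s to 20s across restarts, and the final four systemd lines show kubelet itself being stopped at 09:47:03, consistent with the Pause step under test. Illustrative follow-ups using the pod name from above:

	$ kubectl --context default-k8s-diff-port-942905 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-9jl2v --previous
	$ kubectl --context default-k8s-diff-port-942905 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-9jl2v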
	
	
	==> kubernetes-dashboard [82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e] <==
	2025/10/18 09:46:22 Starting overwatch
	2025/10/18 09:46:22 Using namespace: kubernetes-dashboard
	2025/10/18 09:46:22 Using in-cluster config to connect to apiserver
	2025/10/18 09:46:22 Using secret token for csrf signing
	2025/10/18 09:46:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:46:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:46:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:46:22 Generating JWE encryption key
	2025/10/18 09:46:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:46:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:46:22 Initializing JWE encryption key from synchronized object
	2025/10/18 09:46:22 Creating in-cluster Sidecar client
	2025/10/18 09:46:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:46:22 Serving insecurely on HTTP port: 9090
	2025/10/18 09:46:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
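
The dashboard itself starts cleanly and serves on port 9090; only its metric-client health check fails, on a 30-second retry, which lines up with the dashboard-metrics-scraper pod crash-looping and its Service therefore having no ready endpoints. That can be verified with:

	$ kubectl --context default-k8s-diff-port-942905 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper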
	
	
	==> storage-provisioner [8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4] <==
	I1018 09:46:12.362455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:46:42.364255       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
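
The first storage-provisioner instance dies exactly 30 seconds after starting: its /version probe against the Service VIP times out at 09:46:42, the same 10.96.0.1:443 symptom CoreDNS logged above, and main.go treats that as fatal. Kubelet then starts the replacement whose log follows; the exited instance's output could also be pulled with:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system logs storage-provisioner --previous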
	
	
	==> storage-provisioner [e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d] <==
	I1018 09:46:43.174717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:46:43.184771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:46:43.184864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:46:43.187311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:46.642893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:50.904427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:54.503129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:57.557090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.579280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.584936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:47:00.585101       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:47:00.585248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad!
	I1018 09:47:00.585224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc0e8d2d-9133-4c3a-bcf4-257c6fc89570", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad became leader
	W1018 09:47:00.587806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.590844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:47:00.685482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad!
	W1018 09:47:02.594879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:02.599995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:04.603516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:04.607397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:06.617463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:06.643154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
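
The replacement instance succeeds: it acquires the kube-system/k8s.io-minikube-hostpath lease at 09:47:00 and starts its provisioner controller. The repeated client-go warnings are expected, since this provisioner still does Endpoints-based leader election and v1 Endpoints is deprecated in v1.33+, exactly as the messages say. The lease object itself can be inspected with:

	$ kubectl --context default-k8s-diff-port-942905 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml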
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905: exit status 2 (342.427045ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
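
Note the asymmetry the probe above exposes: the apiserver container still reports Running, yet the command exits with status 2, which minikube uses to signal that not every component is running (kubelet was stopped at 09:47:03 per the log); the harness itself flags this as 'may be ok' for a paused cluster. The full component breakdown comes from the unfiltered form:

	$ out/minikube-linux-amd64 status -p default-k8s-diff-port-942905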
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-942905
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-942905:

-- stdout --
	[
	    {
	        "Id": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	        "Created": "2025-10-18T09:44:58.37670581Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 401045,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-18T09:46:02.830725868Z",
	            "FinishedAt": "2025-10-18T09:46:01.761013496Z"
	        },
	        "Image": "sha256:713c129c627219853b562feca35c3e2fb5544c1fdac756c8255f63f0d7b93507",
	        "ResolvConfPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/hosts",
	        "LogPath": "/var/lib/docker/containers/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01/b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01-json.log",
	        "Name": "/default-k8s-diff-port-942905",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-942905:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-942905",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1c05e040b9d3fa1d87b7bd476cc3f1a1d956d0fa8a23aa3241c3ca0c7a27b01",
	                "LowerDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104-init/diff:/var/lib/docker/overlay2/bc3e8faff0c6f2e9d36a28ad2baf2ecd48584bc3108044c85fcd648d94c3d259/diff",
	                "MergedDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/merged",
	                "UpperDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/diff",
	                "WorkDir": "/var/lib/docker/overlay2/42f1fb875c8f127d75859f8b0b14ffd6f566d0b14d7eeb00ff5ff11c9fa43104/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-942905",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-942905/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-942905",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-942905",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "86e24aafe1d6c9cc8b12b47df59ca428d52dcd84ec17bbbdb08085051fb9d0e6",
	            "SandboxKey": "/var/run/docker/netns/86e24aafe1d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33234"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33235"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33238"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33236"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33237"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-942905": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:48:82:c1:a6:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0fd78e2b1cc4903dcfba13e124358f0be34e6a060a2c5a3353848c2f3b6de6b8",
	                    "EndpointID": "1a98afc09aeedba6791a86b7ea52dd99f7e5297d973189cd9219ac132719692b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-942905",
	                        "b1c05e040b9d"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
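The inspect dump above is the raw JSON; when a post-mortem only needs a couple of fields (container state, the host port mapped to SSH), `docker inspect -f` with a Go template is easier to script than parsing the whole document. A minimal sketch in Go, separate from the test harness, using the profile name from the dump above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectField runs `docker inspect -f <tmpl> <name>` and returns the
	// trimmed output; the template is evaluated against the same JSON
	// shown in the dump above.
	func inspectField(name, tmpl string) (string, error) {
		out, err := exec.Command("docker", "inspect", "-f", tmpl, name).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "default-k8s-diff-port-942905"
		state, _ := inspectField(name, "{{.State.Status}}")
		ssh, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
		fmt.Printf("state=%s ssh=127.0.0.1:%s\n", state, ssh) // e.g. state=running ssh=127.0.0.1:33234
	}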
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905: exit status 2 (344.366873ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
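`minikube status` reports machine state through its exit code as well as stdout, which is why the harness treats exit status 2 as "may be ok" while stdout still says Running. A minimal sketch of reading both from Go's exec package (the command line mirrors the one above):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "default-k8s-diff-port-942905")
		out, err := cmd.Output() // stdout ("Running") is produced even on non-zero exit
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// A non-zero exit carries state information here, so it is
			// logged rather than treated as a hard failure.
			fmt.Printf("host=%q exit=%d (may be ok)\n", out, ee.ExitCode())
			return
		}
		fmt.Printf("host=%q exit=0\n", out)
	}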
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-942905 logs -n 25: (1.287279723s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-345705 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat docker --no-pager                                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo docker system info                                                                                                                             │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo cri-dockerd --version                                                                                                                          │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │ 18 Oct 25 09:46 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:46 UTC │                     │
	│ ssh     │ -p auto-345705 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo containerd config dump                                                                                                                         │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ image   │ default-k8s-diff-port-942905 image list --format=json                                                                                                              │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ ssh     │ -p auto-345705 sudo crio config                                                                                                                                    │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ pause   │ -p default-k8s-diff-port-942905 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-942905 │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │                     │
	│ delete  │ -p auto-345705                                                                                                                                                     │ auto-345705                  │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │ 18 Oct 25 09:47 UTC │
	│ start   │ -p custom-flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-345705        │ jenkins │ v1.37.0 │ 18 Oct 25 09:47 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 09:47:05
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 09:47:05.601790  420154 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:47:05.603796  420154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:05.603842  420154 out.go:374] Setting ErrFile to fd 2...
	I1018 09:47:05.603852  420154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:47:05.604944  420154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:47:05.605695  420154 out.go:368] Setting JSON to false
	I1018 09:47:05.607465  420154 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5370,"bootTime":1760775456,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:47:05.607604  420154 start.go:141] virtualization: kvm guest
	I1018 09:47:05.608906  420154 out.go:179] * [custom-flannel-345705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:47:05.610816  420154 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:47:05.610860  420154 notify.go:220] Checking for updates...
	I1018 09:47:05.612878  420154 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:47:05.614652  420154 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:47:05.615923  420154 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:47:05.617861  420154 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:47:05.618986  420154 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:47:05.626163  420154 config.go:182] Loaded profile config "calico-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:47:05.626317  420154 config.go:182] Loaded profile config "default-k8s-diff-port-942905": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:47:05.626425  420154 config.go:182] Loaded profile config "kindnet-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:47:05.626533  420154 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:47:05.666809  420154 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:47:05.666925  420154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:47:05.749814  420154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:47:05.736258062 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:47:05.750094  420154 docker.go:318] overlay module found
	I1018 09:47:05.752924  420154 out.go:179] * Using the docker driver based on user configuration
	I1018 09:47:05.754258  420154 start.go:305] selected driver: docker
	I1018 09:47:05.754278  420154 start.go:925] validating driver "docker" against <nil>
	I1018 09:47:05.754293  420154 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:47:05.756285  420154 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:47:05.852167  420154 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-18 09:47:05.840304892 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:47:05.852414  420154 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 09:47:05.852715  420154 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1018 09:47:05.854348  420154 out.go:179] * Using Docker driver with root privileges
	I1018 09:47:05.858445  420154 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1018 09:47:05.858477  420154 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1018 09:47:05.858575  420154 start.go:349] cluster config:
	{Name:custom-flannel-345705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-345705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:47:05.864052  420154 out.go:179] * Starting "custom-flannel-345705" primary control-plane node in "custom-flannel-345705" cluster
	I1018 09:47:05.865749  420154 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 09:47:05.867138  420154 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1018 09:47:05.869347  420154 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 09:47:05.869419  420154 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 09:47:05.869437  420154 cache.go:58] Caching tarball of preloaded images
	I1018 09:47:05.869470  420154 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 09:47:05.869541  420154 preload.go:233] Found /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1018 09:47:05.869553  420154 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1018 09:47:05.869697  420154 profile.go:143] Saving config to /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/custom-flannel-345705/config.json ...
	I1018 09:47:05.869726  420154 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/custom-flannel-345705/config.json: {Name:mk9503dc846b72b4e8174954ba310e44ca98385b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1018 09:47:05.899731  420154 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1018 09:47:05.899816  420154 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1018 09:47:05.899865  420154 cache.go:232] Successfully downloaded all kic artifacts
	I1018 09:47:05.899896  420154 start.go:360] acquireMachinesLock for custom-flannel-345705: {Name:mkfd24f8ff48f860d8d70ce24577e3256ae00f69 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1018 09:47:05.900070  420154 start.go:364] duration metric: took 117.113µs to acquireMachinesLock for "custom-flannel-345705"
	I1018 09:47:05.900130  420154 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-345705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-345705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1018 09:47:05.900229  420154 start.go:125] createHost starting for "" (driver="docker")
	I1018 09:47:03.541494  411471 out.go:252]   - Booting up control plane ...
	I1018 09:47:03.541628  411471 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:47:03.541730  411471 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:47:03.541807  411471 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:47:03.555301  411471 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:47:03.555450  411471 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:47:03.561813  411471 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:47:03.562052  411471 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:47:03.562096  411471 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:47:03.662725  411471 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:47:03.662908  411471 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:47:04.163655  411471 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.016064ms
	I1018 09:47:04.167365  411471 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:47:04.167500  411471 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1018 09:47:04.167679  411471 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:47:04.167906  411471 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:47:06.138588  410748 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1018 09:47:06.138658  410748 kubeadm.go:318] [preflight] Running pre-flight checks
	I1018 09:47:06.138772  410748 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1018 09:47:06.138865  410748 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1041-gcp
	I1018 09:47:06.138912  410748 kubeadm.go:318] OS: Linux
	I1018 09:47:06.138973  410748 kubeadm.go:318] CGROUPS_CPU: enabled
	I1018 09:47:06.139039  410748 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1018 09:47:06.139105  410748 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1018 09:47:06.139172  410748 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1018 09:47:06.139236  410748 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1018 09:47:06.139298  410748 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1018 09:47:06.139484  410748 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1018 09:47:06.139568  410748 kubeadm.go:318] CGROUPS_IO: enabled
	I1018 09:47:06.139684  410748 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1018 09:47:06.139876  410748 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1018 09:47:06.140128  410748 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1018 09:47:06.140280  410748 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1018 09:47:06.142354  410748 out.go:252]   - Generating certificates and keys ...
	I1018 09:47:06.142540  410748 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1018 09:47:06.143166  410748 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1018 09:47:06.143938  410748 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1018 09:47:06.144064  410748 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1018 09:47:06.144195  410748 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1018 09:47:06.144329  410748 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1018 09:47:06.144440  410748 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1018 09:47:06.144644  410748 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-345705 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:47:06.144723  410748 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1018 09:47:06.144899  410748 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-345705 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1018 09:47:06.144992  410748 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1018 09:47:06.145077  410748 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1018 09:47:06.145136  410748 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1018 09:47:06.145208  410748 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1018 09:47:06.145275  410748 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1018 09:47:06.145342  410748 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1018 09:47:06.145407  410748 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1018 09:47:06.145504  410748 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1018 09:47:06.145569  410748 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1018 09:47:06.145673  410748 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1018 09:47:06.145758  410748 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1018 09:47:06.147414  410748 out.go:252]   - Booting up control plane ...
	I1018 09:47:06.147589  410748 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1018 09:47:06.148356  410748 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1018 09:47:06.148524  410748 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1018 09:47:06.148779  410748 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1018 09:47:06.148959  410748 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1018 09:47:06.149279  410748 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1018 09:47:06.149401  410748 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1018 09:47:06.149455  410748 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1018 09:47:06.149633  410748 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1018 09:47:06.149777  410748 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1018 09:47:06.150116  410748 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501751578s
	I1018 09:47:06.150289  410748 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1018 09:47:06.150456  410748 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1018 09:47:06.150784  410748 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1018 09:47:06.150991  410748 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1018 09:47:06.151178  410748 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.639662304s
	I1018 09:47:06.151933  410748 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.276691524s
	I1018 09:47:06.152325  410748 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.002693315s
	I1018 09:47:06.152547  410748 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1018 09:47:06.152869  410748 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1018 09:47:06.153009  410748 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1018 09:47:06.153369  410748 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-345705 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1018 09:47:06.153450  410748 kubeadm.go:318] [bootstrap-token] Using token: 1nc264.954ui4u1v5wk03t3
	I1018 09:47:06.154717  410748 out.go:252]   - Configuring RBAC rules ...
	I1018 09:47:06.154878  410748 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1018 09:47:06.154990  410748 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1018 09:47:06.155171  410748 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1018 09:47:06.155329  410748 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1018 09:47:06.155470  410748 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1018 09:47:06.155574  410748 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1018 09:47:06.155717  410748 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1018 09:47:06.155771  410748 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1018 09:47:06.155884  410748 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1018 09:47:06.155891  410748 kubeadm.go:318] 
	I1018 09:47:06.155991  410748 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1018 09:47:06.156081  410748 kubeadm.go:318] 
	I1018 09:47:06.156248  410748 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1018 09:47:06.156279  410748 kubeadm.go:318] 
	I1018 09:47:06.156353  410748 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1018 09:47:06.156484  410748 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1018 09:47:06.156645  410748 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1018 09:47:06.156653  410748 kubeadm.go:318] 
	I1018 09:47:06.156736  410748 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1018 09:47:06.156742  410748 kubeadm.go:318] 
	I1018 09:47:06.156856  410748 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1018 09:47:06.156926  410748 kubeadm.go:318] 
	I1018 09:47:06.157028  410748 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1018 09:47:06.157413  410748 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1018 09:47:06.157548  410748 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1018 09:47:06.157554  410748 kubeadm.go:318] 
	I1018 09:47:06.157675  410748 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1018 09:47:06.157780  410748 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1018 09:47:06.157785  410748 kubeadm.go:318] 
	I1018 09:47:06.157917  410748 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 1nc264.954ui4u1v5wk03t3 \
	I1018 09:47:06.158063  410748 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f \
	I1018 09:47:06.158092  410748 kubeadm.go:318] 	--control-plane 
	I1018 09:47:06.158097  410748 kubeadm.go:318] 
	I1018 09:47:06.158974  410748 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1018 09:47:06.158994  410748 kubeadm.go:318] 
	I1018 09:47:06.159096  410748 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 1nc264.954ui4u1v5wk03t3 \
	I1018 09:47:06.159271  410748 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:74ba1df2d569500585b8d763842957df688dbadb09e84f454440385dfae73a3f 
	I1018 09:47:06.159288  410748 cni.go:84] Creating CNI manager for "kindnet"
	I1018 09:47:06.162664  410748 out.go:179] * Configuring CNI (Container Networking Interface) ...
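The `--discovery-token-ca-cert-hash` in the join command above is, per the kubeadm documentation, the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal sketch of recomputing it in Go; the path used here is the stock kubeadm location, whereas a minikube node keeps its certs under /var/lib/minikube/certs (see the certificateDir line in the log above):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Stock kubeadm CA path; adjust for minikube's cert directory.
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}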
	
	
	==> CRI-O <==
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.794580153Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.798513347Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 18 09:46:22 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:22.798532356Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.985793886Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58995dcf-942d-478b-abcc-090ba68fd32d name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.989222164Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=68c6f5cb-15a5-45c2-8bef-a09a8121a031 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.994190493Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=54f8b655-9324-4850-80b5-73a9b251b6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:36 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:36.996893975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.007081377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.007751298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.041009847Z" level=info msg="Created container 9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=54f8b655-9324-4850-80b5-73a9b251b6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.042330188Z" level=info msg="Starting container: 9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520" id=f96c4cd5-4257-4bb9-b901-4f9c98c88a02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.045267807Z" level=info msg="Started container" PID=1744 containerID=9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper id=f96c4cd5-4257-4bb9-b901-4f9c98c88a02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ebf9016dfa181b96cea3fec4ee533f637d822f6f7873121719e37609b8e65b3
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.095867754Z" level=info msg="Removing container: 29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848" id=11f82728-8e22-4c20-935a-aa0495348a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:37 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:37.108694187Z" level=info msg="Removed container 29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v/dashboard-metrics-scraper" id=11f82728-8e22-4c20-935a-aa0495348a30 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.118447735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=80f17300-77fb-4ddb-b10a-c971af586a0c name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.121151807Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f3f26ad9-d7a8-4374-b2d9-42a09aaad502 name=/runtime.v1.ImageService/ImageStatus
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.122278147Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=efba847b-e108-4cd3-87d3-346b8bd690ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.122553032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.128715334Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.128888796Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7b507c4018a4d93b47383bb20628343fbddab0565741172ccd95e92f2e272b1d/merged/etc/passwd: no such file or directory"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.1289235Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7b507c4018a4d93b47383bb20628343fbddab0565741172ccd95e92f2e272b1d/merged/etc/group: no such file or directory"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.129151528Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.159092461Z" level=info msg="Created container e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d: kube-system/storage-provisioner/storage-provisioner" id=efba847b-e108-4cd3-87d3-346b8bd690ba name=/runtime.v1.RuntimeService/CreateContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.159983131Z" level=info msg="Starting container: e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d" id=0b85db91-453e-4951-8316-1ebc2f3754d5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 18 09:46:43 default-k8s-diff-port-942905 crio[558]: time="2025-10-18T09:46:43.162278782Z" level=info msg="Started container" PID=1758 containerID=e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d description=kube-system/storage-provisioner/storage-provisioner id=0b85db91-453e-4951-8316-1ebc2f3754d5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=84a49c8f455d2bde09a51fc11eed92aad65f9c6ecb2a7c46110f9635e06fff7e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	e8f6f2f0c6908       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   84a49c8f455d2       storage-provisioner                                    kube-system
	9848de10f90a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago      Exited              dashboard-metrics-scraper   2                   5ebf9016dfa18       dashboard-metrics-scraper-6ffb444bf9-9jl2v             kubernetes-dashboard
	82abe805433de       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   c50ebf440566a       kubernetes-dashboard-855c9754f9-4zp6s                  kubernetes-dashboard
	21802871fa133       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   4db80367206a4       coredns-66bc5c9577-g6bf9                               kube-system
	2c950dedcdc79       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   a3df39d8aade6       busybox                                                default
	8b084e558fd84       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   30f198d1a183c       kindnet-xtmcm                                          kube-system
	8a0116addb512       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   84a49c8f455d2       storage-provisioner                                    kube-system
	02d960c0e6124       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   2f96c17406d5d       kube-proxy-x9fjs                                       kube-system
	53c162813a56d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   1419a113a6730       kube-apiserver-default-k8s-diff-port-942905            kube-system
	c1d5522dfa9c2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   04064a1aa7e8a       kube-scheduler-default-k8s-diff-port-942905            kube-system
	064212d5e2e85       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   ca2b06a4a3520       etcd-default-k8s-diff-port-942905                      kube-system
	776062d447e41       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   d0586d4908d87       kube-controller-manager-default-k8s-diff-port-942905   kube-system
	
	
	==> coredns [21802871fa1331d84d1fa487b00b614455584cc8d2041b8d618ee4a615d48804] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36932 - 62992 "HINFO IN 318217287814050399.5282288501228099778. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024684964s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-942905
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-942905
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=79bec0e4e6a9d2f11f51ad368067510a91b02e89
	                    minikube.k8s.io/name=default-k8s-diff-port-942905
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_18T09_45_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 18 Oct 2025 09:45:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-942905
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 18 Oct 2025 09:46:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 18 Oct 2025 09:46:52 +0000   Sat, 18 Oct 2025 09:45:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-942905
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d003bb31a145a6c010d7ddda68f0c68d
	  System UUID:                2840e9d8-1f17-40a1-ae4d-ed361a5c39b0
	  Boot ID:                    315b43e4-7930-446b-aba3-f3ceaf080aec
	  Kernel Version:             6.8.0-1041-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-g6bf9                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-942905                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-xtmcm                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-942905             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-942905    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-x9fjs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-942905             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9jl2v              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4zp6s                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-942905 event: Registered Node default-k8s-diff-port-942905 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-942905 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 60s)  kubelet          Node default-k8s-diff-port-942905 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-942905 event: Registered Node default-k8s-diff-port-942905 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa e0 7b 05 e4 80 08 06
	[  +4.589610] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 ce ed b5 f6 28 08 06
	[Oct18 09:01] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.048888] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023859] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023922] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.023890] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +1.024872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +2.046863] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[  +4.031611] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[Oct18 09:02] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +16.382660] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	[ +32.253344] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: e6 23 fa a1 9a 96 ee 1d 02 44 19 02 08 00
	
	
	==> etcd [064212d5e2e85f534b67da4cce1414ed832093be45a30363299ae9169f550be2] <==
	{"level":"warn","ts":"2025-10-18T09:46:10.981028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:10.992927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.000776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.007872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.015355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.023278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.030617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.036616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.043455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.050926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.057710Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.064311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.070746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.078218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.085493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.093532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.100420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.107688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.114941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.121955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.129656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.144856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.153678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.162938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-18T09:46:11.225964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34782","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:47:08 up  1:29,  0 user,  load average: 5.93, 3.66, 2.25
	Linux default-k8s-diff-port-942905 6.8.0-1041-gcp #43~22.04.1-Ubuntu SMP Wed Sep 24 23:11:19 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8b084e558fd84916ef47e7eaa9ae3efc62932788a9d7aebc2afab7d9b669b8d0] <==
	I1018 09:46:12.481875       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1018 09:46:12.482128       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1018 09:46:12.482307       1 main.go:148] setting mtu 1500 for CNI 
	I1018 09:46:12.482325       1 main.go:178] kindnetd IP family: "ipv4"
	I1018 09:46:12.482352       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-18T09:46:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1018 09:46:12.777280       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1018 09:46:12.777346       1 controller.go:381] "Waiting for informer caches to sync"
	I1018 09:46:12.777357       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1018 09:46:12.777491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1018 09:46:13.178148       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1018 09:46:13.178171       1 metrics.go:72] Registering metrics
	I1018 09:46:13.178238       1 controller.go:711] "Syncing nftables rules"
	I1018 09:46:22.776928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:22.777028       1 main.go:301] handling current node
	I1018 09:46:32.781892       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:32.781922       1 main.go:301] handling current node
	I1018 09:46:42.776491       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:42.776533       1 main.go:301] handling current node
	I1018 09:46:52.778912       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:46:52.778960       1 main.go:301] handling current node
	I1018 09:47:02.785928       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1018 09:47:02.785970       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53c162813a56d295f5c9bcb964babffaba1ef65c7c7abd379dcda49590ad1624] <==
	I1018 09:46:11.747462       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1018 09:46:11.747580       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1018 09:46:11.750124       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1018 09:46:11.750477       1 aggregator.go:171] initial CRD sync complete...
	I1018 09:46:11.750487       1 autoregister_controller.go:144] Starting autoregister controller
	I1018 09:46:11.750493       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1018 09:46:11.750499       1 cache.go:39] Caches are synced for autoregister controller
	I1018 09:46:11.758079       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1018 09:46:11.760352       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1018 09:46:11.765961       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1018 09:46:11.775837       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1018 09:46:11.775939       1 policy_source.go:240] refreshing policies
	I1018 09:46:11.790854       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1018 09:46:11.816867       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1018 09:46:12.076187       1 controller.go:667] quota admission added evaluator for: namespaces
	I1018 09:46:12.104676       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1018 09:46:12.106194       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1018 09:46:12.132069       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1018 09:46:12.138966       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1018 09:46:12.176539       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.245.25"}
	I1018 09:46:12.186070       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.153.83"}
	I1018 09:46:12.648617       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1018 09:46:14.681392       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1018 09:46:14.977939       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1018 09:46:15.029585       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [776062d447e4140eaa670ac1d98115ec30e1134f45a3a41e47e11885ee45e152] <==
	I1018 09:46:14.373997       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1018 09:46:14.374011       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1018 09:46:14.374424       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1018 09:46:14.374542       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1018 09:46:14.374669       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1018 09:46:14.375101       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1018 09:46:14.375104       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1018 09:46:14.375255       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1018 09:46:14.375385       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-942905"
	I1018 09:46:14.375437       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1018 09:46:14.375641       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1018 09:46:14.375688       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1018 09:46:14.375838       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1018 09:46:14.375907       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1018 09:46:14.375961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1018 09:46:14.381334       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1018 09:46:14.381464       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1018 09:46:14.381507       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1018 09:46:14.381515       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1018 09:46:14.381521       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1018 09:46:14.381963       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1018 09:46:14.384051       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1018 09:46:14.390332       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1018 09:46:14.394688       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1018 09:46:14.442008       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [02d960c0e61242ffff4e9fcd0c35c06d979cf2d48707f0653267afb54dda8b23] <==
	I1018 09:46:12.387451       1 server_linux.go:53] "Using iptables proxy"
	I1018 09:46:12.450562       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1018 09:46:12.551185       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1018 09:46:12.551235       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1018 09:46:12.551332       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1018 09:46:12.570770       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1018 09:46:12.570817       1 server_linux.go:132] "Using iptables Proxier"
	I1018 09:46:12.575923       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1018 09:46:12.576284       1 server.go:527] "Version info" version="v1.34.1"
	I1018 09:46:12.576301       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:46:12.578181       1 config.go:309] "Starting node config controller"
	I1018 09:46:12.578211       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1018 09:46:12.578220       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1018 09:46:12.578234       1 config.go:403] "Starting serviceCIDR config controller"
	I1018 09:46:12.578251       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1018 09:46:12.578274       1 config.go:200] "Starting service config controller"
	I1018 09:46:12.578299       1 config.go:106] "Starting endpoint slice config controller"
	I1018 09:46:12.578512       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1018 09:46:12.578888       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1018 09:46:12.678699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1018 09:46:12.678711       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1018 09:46:12.679876       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [c1d5522dfa9c2b152efa910b995d7591a777193dd2a5d91b03598fa2e0d960d7] <==
	I1018 09:46:10.406425       1 serving.go:386] Generated self-signed cert in-memory
	W1018 09:46:11.696804       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1018 09:46:11.696852       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1018 09:46:11.696866       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1018 09:46:11.696878       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1018 09:46:11.732710       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1018 09:46:11.732742       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1018 09:46:11.735752       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:46:11.735806       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1018 09:46:11.736554       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1018 09:46:11.736639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1018 09:46:11.836203       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 18 09:46:14 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:14.923394     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/92964e9c-974b-45c0-99fd-c175df299295-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4zp6s\" (UID: \"92964e9c-974b-45c0-99fd-c175df299295\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s"
	Oct 18 09:46:14 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:14.923416     720 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwzff\" (UniqueName: \"kubernetes.io/projected/92964e9c-974b-45c0-99fd-c175df299295-kube-api-access-jwzff\") pod \"kubernetes-dashboard-855c9754f9-4zp6s\" (UID: \"92964e9c-974b-45c0-99fd-c175df299295\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s"
	Oct 18 09:46:18 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:18.924881     720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 18 09:46:19 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:19.043268     720 scope.go:117] "RemoveContainer" containerID="f08e9a2a19e8fab94f56d13bf8dbe111641f4cee79c103c0ef764cf46b4b3dca"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:20.048007     720 scope.go:117] "RemoveContainer" containerID="f08e9a2a19e8fab94f56d13bf8dbe111641f4cee79c103c0ef764cf46b4b3dca"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:20.048379     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:20 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:20.048576     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:21 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:21.052868     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:21 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:21.053052     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:23.068874     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4zp6s" podStartSLOduration=2.112345023 podStartE2EDuration="9.068850261s" podCreationTimestamp="2025-10-18 09:46:14 +0000 UTC" firstStartedPulling="2025-10-18 09:46:15.194973047 +0000 UTC m=+6.308701008" lastFinishedPulling="2025-10-18 09:46:22.151478272 +0000 UTC m=+13.265206246" observedRunningTime="2025-10-18 09:46:23.068538629 +0000 UTC m=+14.182266606" watchObservedRunningTime="2025-10-18 09:46:23.068850261 +0000 UTC m=+14.182578230"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:23.537772     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:23 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:23.538030     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:36 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:36.985279     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:37.093499     720 scope.go:117] "RemoveContainer" containerID="29092e2680b6df022dc167400025a5542cf03b9b39e968fe03ffe756eb12d848"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:37.094087     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:37 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:37.095236     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:43.117858     720 scope.go:117] "RemoveContainer" containerID="8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:43.538185     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:43 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:43.538375     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:46:54 default-k8s-diff-port-942905 kubelet[720]: I1018 09:46:54.985076     720 scope.go:117] "RemoveContainer" containerID="9848de10f90a2d3cc6f9a9ec6b8480bcd5c643cec64a7b84c85b4a4650106520"
	Oct 18 09:46:54 default-k8s-diff-port-942905 kubelet[720]: E1018 09:46:54.985708     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9jl2v_kubernetes-dashboard(f38fa20f-fbae-4afa-a11e-2ad189b49cb5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9jl2v" podUID="f38fa20f-fbae-4afa-a11e-2ad189b49cb5"
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 18 09:47:03 default-k8s-diff-port-942905 systemd[1]: kubelet.service: Consumed 1.752s CPU time.
	
	
	==> kubernetes-dashboard [82abe805433defecdbe599791f5d38a0c1802aefc0033670a50919ab6805830e] <==
	2025/10/18 09:46:22 Using namespace: kubernetes-dashboard
	2025/10/18 09:46:22 Using in-cluster config to connect to apiserver
	2025/10/18 09:46:22 Using secret token for csrf signing
	2025/10/18 09:46:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/18 09:46:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/18 09:46:22 Successful initial request to the apiserver, version: v1.34.1
	2025/10/18 09:46:22 Generating JWE encryption key
	2025/10/18 09:46:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/18 09:46:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/18 09:46:22 Initializing JWE encryption key from synchronized object
	2025/10/18 09:46:22 Creating in-cluster Sidecar client
	2025/10/18 09:46:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:46:22 Serving insecurely on HTTP port: 9090
	2025/10/18 09:46:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/18 09:46:22 Starting overwatch
	
	
	==> storage-provisioner [8a0116addb51231bf7c34dcf64ceb1af03ae8d72cdc1f69232c2ab4b0af736d4] <==
	I1018 09:46:12.362455       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1018 09:46:42.364255       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e8f6f2f0c69081c9e986e1cb8b40c8a12a4d8785f7a80ae9aa9cfc74ccc81d3d] <==
	I1018 09:46:43.174717       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1018 09:46:43.184771       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1018 09:46:43.184864       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1018 09:46:43.187311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:46.642893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:50.904427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:54.503129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:46:57.557090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.579280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.584936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:47:00.585101       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1018 09:47:00.585248       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad!
	I1018 09:47:00.585224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fc0e8d2d-9133-4c3a-bcf4-257c6fc89570", APIVersion:"v1", ResourceVersion:"671", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad became leader
	W1018 09:47:00.587806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:00.590844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1018 09:47:00.685482       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-942905_22024dc6-4df0-48db-8a01-8064aa87ecad!
	W1018 09:47:02.594879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:02.599995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:04.603516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:04.607397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:06.617463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:06.643154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:08.646838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1018 09:47:08.652488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905: exit status 2 (361.011996ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.10s)
E1018 09:48:36.945406  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (264/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 11.72
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.2
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 11.01
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.06
18 TestDownloadOnly/v1.34.1/DeleteAll 0.2
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.36
21 TestBinaryMirror 0.79
22 TestOffline 58.05
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 164.59
31 TestAddons/serial/GCPAuth/Namespaces 0.15
32 TestAddons/serial/GCPAuth/FakeCredentials 8.42
48 TestAddons/StoppedEnableDisable 16.63
49 TestCertOptions 30.33
50 TestCertExpiration 218.29
52 TestForceSystemdFlag 27.82
53 TestForceSystemdEnv 39.22
55 TestKVMDriverInstallOrUpdate 0.67
59 TestErrorSpam/setup 21.86
60 TestErrorSpam/start 0.62
61 TestErrorSpam/status 0.89
62 TestErrorSpam/pause 6.28
63 TestErrorSpam/unpause 5.75
64 TestErrorSpam/stop 8.01
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 41.61
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.51
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.07
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.79
76 TestFunctional/serial/CacheCmd/cache/add_local 1.78
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 5.65
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.1
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 66.18
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.21
87 TestFunctional/serial/LogsFileCmd 1.25
88 TestFunctional/serial/InvalidService 4.8
90 TestFunctional/parallel/ConfigCmd 0.37
91 TestFunctional/parallel/DashboardCmd 6.95
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.11
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 25.21
102 TestFunctional/parallel/SSHCmd 0.52
103 TestFunctional/parallel/CpCmd 1.78
104 TestFunctional/parallel/MySQL 17.39
105 TestFunctional/parallel/FileSync 0.28
106 TestFunctional/parallel/CertSync 1.56
110 TestFunctional/parallel/NodeLabels 0.08
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.31
116 TestFunctional/parallel/Version/short 0.05
117 TestFunctional/parallel/Version/components 0.56
118 TestFunctional/parallel/ImageCommands/ImageListShort 1.87
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
122 TestFunctional/parallel/ImageCommands/ImageBuild 3.39
123 TestFunctional/parallel/ImageCommands/Setup 1.73
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.21
136 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
137 TestFunctional/parallel/ProfileCmd/profile_list 0.43
138 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.39
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/MountCmd/any-port 7.8
149 TestFunctional/parallel/MountCmd/specific-port 1.79
150 TestFunctional/parallel/MountCmd/VerifyCleanup 2
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.68
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 153.33
164 TestMultiControlPlane/serial/DeployApp 5.05
165 TestMultiControlPlane/serial/PingHostFromPods 0.94
166 TestMultiControlPlane/serial/AddWorkerNode 24.18
167 TestMultiControlPlane/serial/NodeLabels 0.07
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
169 TestMultiControlPlane/serial/CopyFile 16.34
170 TestMultiControlPlane/serial/StopSecondaryNode 14.22
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
172 TestMultiControlPlane/serial/RestartSecondaryNode 14.64
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 119.51
175 TestMultiControlPlane/serial/DeleteSecondaryNode 10.5
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
177 TestMultiControlPlane/serial/StopCluster 43.38
178 TestMultiControlPlane/serial/RestartCluster 52.76
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.7
180 TestMultiControlPlane/serial/AddSecondaryNode 54.81
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.86
185 TestJSONOutput/start/Command 36.89
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.91
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.2
210 TestKicCustomNetwork/create_custom_network 35.77
211 TestKicCustomNetwork/use_default_bridge_network 23.98
212 TestKicExistingNetwork 23.65
213 TestKicCustomSubnet 25.66
214 TestKicStaticIP 27.3
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 47.53
219 TestMountStart/serial/StartWithMountFirst 8.75
220 TestMountStart/serial/VerifyMountFirst 0.25
221 TestMountStart/serial/StartWithMountSecond 5.82
222 TestMountStart/serial/VerifyMountSecond 0.25
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.25
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.77
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 92.28
231 TestMultiNode/serial/DeployApp2Nodes 4.57
232 TestMultiNode/serial/PingHostFrom2Pods 0.64
233 TestMultiNode/serial/AddNode 24.12
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.39
237 TestMultiNode/serial/StopNode 2.21
238 TestMultiNode/serial/StartAfterStop 7.09
239 TestMultiNode/serial/RestartKeepsNodes 76.15
240 TestMultiNode/serial/DeleteNode 5.2
241 TestMultiNode/serial/StopMultiNode 28.43
242 TestMultiNode/serial/RestartMultiNode 50.73
243 TestMultiNode/serial/ValidateNameConflict 24.86
248 TestPreload 93.58
250 TestScheduledStopUnix 97.33
253 TestInsufficientStorage 10.11
254 TestRunningBinaryUpgrade 55.06
256 TestKubernetesUpgrade 297.71
257 TestMissingContainerUpgrade 77.97
259 TestStoppedBinaryUpgrade/Setup 3.04
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
261 TestNoKubernetes/serial/StartWithK8s 37.16
262 TestStoppedBinaryUpgrade/Upgrade 74.13
263 TestNoKubernetes/serial/StartWithStopK8s 17.15
264 TestNoKubernetes/serial/Start 7.88
273 TestPause/serial/Start 39.97
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
275 TestNoKubernetes/serial/ProfileList 1.86
276 TestNoKubernetes/serial/Stop 1.28
277 TestNoKubernetes/serial/StartNoArgs 7.28
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
282 TestStoppedBinaryUpgrade/MinikubeLogs 1.13
287 TestNetworkPlugins/group/false 3.36
291 TestPause/serial/SecondStartNoReconfiguration 8.13
294 TestStartStop/group/old-k8s-version/serial/FirstStart 53.49
296 TestStartStop/group/no-preload/serial/FirstStart 51.76
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.23
299 TestStartStop/group/no-preload/serial/DeployApp 9.24
300 TestStartStop/group/old-k8s-version/serial/Stop 16.03
302 TestStartStop/group/no-preload/serial/Stop 18.06
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
304 TestStartStop/group/old-k8s-version/serial/SecondStart 29.35
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
306 TestStartStop/group/no-preload/serial/SecondStart 43.58
307 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
309 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
312 TestStartStop/group/embed-certs/serial/FirstStart 39.83
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.18
320 TestStartStop/group/newest-cni/serial/FirstStart 30.24
321 TestStartStop/group/embed-certs/serial/DeployApp 8.25
323 TestStartStop/group/embed-certs/serial/Stop 18.18
324 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/Stop 12.42
327 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
328 TestStartStop/group/embed-certs/serial/SecondStart 51.21
329 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.25
330 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
331 TestStartStop/group/newest-cni/serial/SecondStart 11.8
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.62
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
338 TestNetworkPlugins/group/auto/Start 39.82
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 48.77
341 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
343 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
345 TestNetworkPlugins/group/auto/KubeletFlags 0.33
346 TestNetworkPlugins/group/auto/NetCatPod 8.3
347 TestNetworkPlugins/group/kindnet/Start 41.47
348 TestNetworkPlugins/group/auto/DNS 0.12
349 TestNetworkPlugins/group/auto/Localhost 0.1
350 TestNetworkPlugins/group/auto/HairPin 0.1
351 TestNetworkPlugins/group/calico/Start 55.31
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
356 TestNetworkPlugins/group/custom-flannel/Start 53.38
357 TestNetworkPlugins/group/enable-default-cni/Start 64.3
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
360 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
361 TestNetworkPlugins/group/kindnet/DNS 0.11
362 TestNetworkPlugins/group/kindnet/Localhost 0.11
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/HairPin 0.11
365 TestNetworkPlugins/group/calico/KubeletFlags 0.29
366 TestNetworkPlugins/group/calico/NetCatPod 9.22
367 TestNetworkPlugins/group/calico/DNS 0.12
368 TestNetworkPlugins/group/calico/Localhost 0.09
369 TestNetworkPlugins/group/calico/HairPin 0.09
370 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
371 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.18
372 TestNetworkPlugins/group/flannel/Start 54.37
373 TestNetworkPlugins/group/custom-flannel/DNS 0.12
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
376 TestNetworkPlugins/group/bridge/Start 67.53
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.23
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
384 TestNetworkPlugins/group/flannel/NetCatPod 8.17
385 TestNetworkPlugins/group/flannel/DNS 0.11
386 TestNetworkPlugins/group/flannel/Localhost 0.1
387 TestNetworkPlugins/group/flannel/HairPin 0.09
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 9.19
390 TestNetworkPlugins/group/bridge/DNS 0.1
391 TestNetworkPlugins/group/bridge/Localhost 0.08
392 TestNetworkPlugins/group/bridge/HairPin 0.09
TestDownloadOnly/v1.28.0/json-events (11.72s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-429693 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-429693 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.724202397s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (11.72s)
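
Note: the invocation under test can be replayed by hand. A minimal sketch (assumes a locally built binary at out/minikube-linux-amd64; the profile name is arbitrary):

    # cache the v1.28.0 images and preload without creating a cluster;
    # -o=json emits progress as JSON events, which is what this test parses
    out/minikube-linux-amd64 start -o=json --download-only -p download-only-429693 \
      --force --alsologtostderr --kubernetes-version=v1.28.0 \
      --container-runtime=crio --driver=docker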

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1018 08:58:12.054184  134611 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1018 08:58:12.054294  134611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
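
Note: the preload check amounts to a stat of the cached tarball. An equivalent manual check, with the path taken from the log above (sketch; MINIKUBE_HOME defaults to $HOME/.minikube when unset):

    ls -lh "${MINIKUBE_HOME:-$HOME/.minikube}/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"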

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-429693
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-429693: exit status 85 (60.168144ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-429693 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-429693 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:58:00
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:58:00.370753  134623 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:58:00.371054  134623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:00.371065  134623 out.go:374] Setting ErrFile to fd 2...
	I1018 08:58:00.371070  134623 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:00.371322  134623 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	W1018 08:58:00.371501  134623 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21764-131066/.minikube/config/config.json: open /home/jenkins/minikube-integration/21764-131066/.minikube/config/config.json: no such file or directory
	I1018 08:58:00.372101  134623 out.go:368] Setting JSON to true
	I1018 08:58:00.373093  134623 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2424,"bootTime":1760775456,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:58:00.373205  134623 start.go:141] virtualization: kvm guest
	I1018 08:58:00.375357  134623 out.go:99] [download-only-429693] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1018 08:58:00.375501  134623 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball: no such file or directory
	I1018 08:58:00.375556  134623 notify.go:220] Checking for updates...
	I1018 08:58:00.376729  134623 out.go:171] MINIKUBE_LOCATION=21764
	I1018 08:58:00.378054  134623 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:58:00.379348  134623 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:58:00.380528  134623 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 08:58:00.381739  134623 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:58:00.383671  134623 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:58:00.383977  134623 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:58:00.406743  134623 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:58:00.406904  134623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:00.463883  134623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-18 08:58:00.453766022 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:00.463995  134623 docker.go:318] overlay module found
	I1018 08:58:00.465586  134623 out.go:99] Using the docker driver based on user configuration
	I1018 08:58:00.465620  134623 start.go:305] selected driver: docker
	I1018 08:58:00.465630  134623 start.go:925] validating driver "docker" against <nil>
	I1018 08:58:00.465726  134623 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:00.521083  134623 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-18 08:58:00.511470762 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:00.521265  134623 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:58:00.521810  134623 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 08:58:00.521999  134623 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:58:00.523497  134623 out.go:171] Using Docker driver with root privileges
	I1018 08:58:00.524547  134623 cni.go:84] Creating CNI manager for ""
	I1018 08:58:00.524609  134623 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:00.524621  134623 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:58:00.524678  134623 start.go:349] cluster config:
	{Name:download-only-429693 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-429693 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:58:00.525905  134623 out.go:99] Starting "download-only-429693" primary control-plane node in "download-only-429693" cluster
	I1018 08:58:00.525927  134623 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:58:00.526976  134623 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:58:00.527008  134623 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:58:00.527164  134623 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:58:00.543932  134623 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:58:00.544129  134623 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:58:00.544224  134623 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:58:00.627755  134623 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 08:58:00.627793  134623 cache.go:58] Caching tarball of preloaded images
	I1018 08:58:00.627969  134623 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1018 08:58:00.629541  134623 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1018 08:58:00.629556  134623 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 08:58:00.728699  134623 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1018 08:58:00.728896  134623 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1018 08:58:05.059020  134623 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	
	
	* The control-plane node download-only-429693 host does not exist
	  To start a cluster, run: "minikube start -p download-only-429693"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
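
Note: exit status 85 is the expected outcome here, not a failure: the profile exists but no host was ever created (download-only), so "minikube logs" can only print the local audit and start logs. A sketch of the same probe:

    out/minikube-linux-amd64 logs -p download-only-429693
    echo $?   # 85 - profile present, control-plane host does not exist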

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-429693
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.01s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-234186 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-234186 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.008656214s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.01s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1018 08:58:23.456479  134611 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1018 08:58:23.456530  134611 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-234186
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-234186: exit status 85 (59.752771ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-429693 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-429693 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ delete  │ -p download-only-429693                                                                                                                                                   │ download-only-429693 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │ 18 Oct 25 08:58 UTC │
	│ start   │ -o=json --download-only -p download-only-234186 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-234186 │ jenkins │ v1.37.0 │ 18 Oct 25 08:58 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/18 08:58:12
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1018 08:58:12.486967  135008 out.go:360] Setting OutFile to fd 1 ...
	I1018 08:58:12.487079  135008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:12.487085  135008 out.go:374] Setting ErrFile to fd 2...
	I1018 08:58:12.487090  135008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 08:58:12.487308  135008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 08:58:12.487883  135008 out.go:368] Setting JSON to true
	I1018 08:58:12.488722  135008 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2436,"bootTime":1760775456,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 08:58:12.488807  135008 start.go:141] virtualization: kvm guest
	I1018 08:58:12.490843  135008 out.go:99] [download-only-234186] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 08:58:12.490997  135008 notify.go:220] Checking for updates...
	I1018 08:58:12.492312  135008 out.go:171] MINIKUBE_LOCATION=21764
	I1018 08:58:12.493636  135008 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 08:58:12.494881  135008 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 08:58:12.495951  135008 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 08:58:12.497034  135008 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1018 08:58:12.499057  135008 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1018 08:58:12.499306  135008 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 08:58:12.521436  135008 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 08:58:12.521514  135008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:12.573289  135008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:12.563834618 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:12.573425  135008 docker.go:318] overlay module found
	I1018 08:58:12.574975  135008 out.go:99] Using the docker driver based on user configuration
	I1018 08:58:12.575002  135008 start.go:305] selected driver: docker
	I1018 08:58:12.575007  135008 start.go:925] validating driver "docker" against <nil>
	I1018 08:58:12.575096  135008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 08:58:12.630168  135008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-18 08:58:12.621192488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 08:58:12.630385  135008 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1018 08:58:12.631133  135008 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1018 08:58:12.631323  135008 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1018 08:58:12.632766  135008 out.go:171] Using Docker driver with root privileges
	I1018 08:58:12.633723  135008 cni.go:84] Creating CNI manager for ""
	I1018 08:58:12.633794  135008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1018 08:58:12.633807  135008 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1018 08:58:12.633882  135008 start.go:349] cluster config:
	{Name:download-only-234186 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-234186 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 08:58:12.635064  135008 out.go:99] Starting "download-only-234186" primary control-plane node in "download-only-234186" cluster
	I1018 08:58:12.635085  135008 cache.go:123] Beginning downloading kic base image for docker with crio
	I1018 08:58:12.636145  135008 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1018 08:58:12.636171  135008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:12.636279  135008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1018 08:58:12.652260  135008 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1018 08:58:12.652384  135008 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1018 08:58:12.652401  135008 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1018 08:58:12.652406  135008 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1018 08:58:12.652416  135008 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1018 08:58:12.987655  135008 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1018 08:58:12.987686  135008 cache.go:58] Caching tarball of preloaded images
	I1018 08:58:12.987922  135008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1018 08:58:12.989433  135008 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1018 08:58:12.989448  135008 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1018 08:58:13.090685  135008 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1018 08:58:13.090732  135008 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21764-131066/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-234186 host does not exist
	  To start a cluster, run: "minikube start -p download-only-234186"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.06s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-234186
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.36s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-014677 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-014677" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-014677
--- PASS: TestDownloadOnlyKic (0.36s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1018 08:58:24.474300  134611 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-818527 --alsologtostderr --binary-mirror http://127.0.0.1:41249 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-818527" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-818527
--- PASS: TestBinaryMirror (0.79s)
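
Note: the test serves the Kubernetes binaries from a local HTTP mirror; the upstream URL-plus-checksum scheme it mirrors is visible in the log line above. A manual fetch along the same lines (sketch; version as in this run):

    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
    # the .sha256 file holds only the digest, so build a sha256sum-style line
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expect: kubectl: OK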

                                                
                                    
TestOffline (58.05s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-632094 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-632094 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (55.535817373s)
helpers_test.go:175: Cleaning up "offline-crio-632094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-632094
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-632094: (2.512071643s)
--- PASS: TestOffline (58.05s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-222746
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-222746: exit status 85 (50.246147ms)

                                                
                                                
-- stdout --
	* Profile "addons-222746" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-222746"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-222746
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-222746: exit status 85 (49.103183ms)

                                                
                                                
-- stdout --
	* Profile "addons-222746" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-222746"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
TestAddons/Setup (164.59s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-222746 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-222746 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m44.590112323s)
--- PASS: TestAddons/Setup (164.59s)
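
Note: the setup command enables every addon under test in a single start. Reflowed for readability (identical flags to the one-liner above):

    out/minikube-linux-amd64 start -p addons-222746 --wait=true --memory=4096 --alsologtostderr \
      --driver=docker --container-runtime=crio \
      --addons=registry --addons=registry-creds --addons=metrics-server \
      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
      --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher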

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.15s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-222746 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-222746 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)
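
Note: the assertion here is that the gcp-auth addon replicates its credentials Secret into any newly created namespace. The same two steps by hand (the -o yaml inspection is illustrative, not part of the test):

    kubectl --context addons-222746 create ns new-namespace
    kubectl --context addons-222746 get secret gcp-auth -n new-namespace -o yaml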

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-222746 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-222746 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5fc7c677-a2c0-4ad1-91d2-05d5bef7fde7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5fc7c677-a2c0-4ad1-91d2-05d5bef7fde7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003890414s
addons_test.go:694: (dbg) Run:  kubectl --context addons-222746 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-222746 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-222746 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)
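
Note: with fake credentials in place, the addon injects GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into new pods. The probes the test runs, condensed into a sketch:

    kubectl --context addons-222746 exec busybox -- /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS'
    kubectl --context addons-222746 exec busybox -- /bin/sh -c 'printenv GOOGLE_CLOUD_PROJECT'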

                                                
                                    
TestAddons/StoppedEnableDisable (16.63s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-222746
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-222746: (16.380755709s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-222746
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-222746
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-222746
--- PASS: TestAddons/StoppedEnableDisable (16.63s)
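
Note: addon state can be toggled while the cluster is stopped; the change is recorded in the profile and takes effect on the next start. The sequence above, condensed:

    out/minikube-linux-amd64 stop -p addons-222746
    out/minikube-linux-amd64 addons enable dashboard -p addons-222746
    out/minikube-linux-amd64 addons disable dashboard -p addons-222746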

                                                
                                    
TestCertOptions (30.33s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-310417 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (27.15547954s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-310417 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-310417 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-310417 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-310417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-310417
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-310417: (2.437406206s)
--- PASS: TestCertOptions (30.33s)
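
Note: the ssh step above dumps the generated API server certificate; the custom --apiserver-ips/--apiserver-names values should appear as SANs. A sketch that narrows the output to just that section (the grep is illustrative):

    out/minikube-linux-amd64 -p cert-options-310417 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'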

                                                
                                    
TestCertExpiration (218.29s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (29.119758063s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.542020897s)
helpers_test.go:175: Cleaning up "cert-expiration-650496" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-650496
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-650496: (2.625942654s)
--- PASS: TestCertExpiration (218.29s)
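
Note: the test issues certificates with a deliberately short --cert-expiration=3m, waits for them to lapse (hence most of the 218s wall time), then restarts with a longer expiry, which regenerates the certs. Condensed sketch:

    out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 \
      --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180   # let the 3m certificates expire
    out/minikube-linux-amd64 start -p cert-expiration-650496 --memory=3072 \
      --cert-expiration=8760h --driver=docker --container-runtime=crio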

                                                
                                    
TestForceSystemdFlag (27.82s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-565668 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.319102926s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-565668 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-565668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-565668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-565668: (4.203093851s)
--- PASS: TestForceSystemdFlag (27.82s)
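
Note: --force-systemd should surface as the systemd cgroup manager in CRI-O's drop-in config, which is what the cat above checks. A sketch that narrows it down (the exact key/value is an assumption about the drop-in's contents):

    out/minikube-linux-amd64 -p force-systemd-flag-565668 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected along the lines of: cgroup_manager = "systemd"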

                                                
                                    
TestForceSystemdEnv (39.22s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-678647 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-678647 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.721053985s)
helpers_test.go:175: Cleaning up "force-systemd-env-678647" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-678647
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-678647: (2.50059368s)
--- PASS: TestForceSystemdEnv (39.22s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.67s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1018 09:41:41.641081  134611 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1018 09:41:41.641229  134611 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1838962029/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:41:41.674920  134611 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1838962029/001/docker-machine-driver-kvm2 version is 1.1.1
W1018 09:41:41.674964  134611 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W1018 09:41:41.675087  134611 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1018 09:41:41.675134  134611 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate1838962029/001/docker-machine-driver-kvm2
I1018 09:41:42.161033  134611 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate1838962029/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1018 09:41:42.176947  134611 install.go:163] /tmp/TestKVMDriverInstallOrUpdate1838962029/001/docker-machine-driver-kvm2 version is 1.37.0
--- PASS: TestKVMDriverInstallOrUpdate (0.67s)
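Note: the update flow logged above is: validate the docker-machine-driver-kvm2 binary on PATH, compare its reported version (1.1.1) against the wanted release (1.37.0), re-download the release asset with its .sha256 checksum, and re-validate. A minimal manual equivalent, sketched under the assumptions that the .sha256 asset holds a bare hash and that the driver binary answers a `version` subcommand (which is how minikube's install.go probes it):

    curl -fLo docker-machine-driver-kvm2 \
      https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64
    curl -fLo driver.sha256 \
      https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256
    # assumed format: bare sha256 hash in the .sha256 asset
    echo "$(cat driver.sha256)  docker-machine-driver-kvm2" | sha256sum -c -
    chmod +x docker-machine-driver-kvm2
    ./docker-machine-driver-kvm2 version   # should now report 1.37.0, matching install.go:163 above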

TestErrorSpam/setup (21.86s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-813872 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-813872 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-813872 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-813872 --driver=docker  --container-runtime=crio: (21.862308849s)
--- PASS: TestErrorSpam/setup (21.86s)

TestErrorSpam/start (0.62s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 start --dry-run
--- PASS: TestErrorSpam/start (0.62s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (6.28s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause: exit status 80 (1.618166273s)
-- stdout --
	* Pausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause: exit status 80 (2.431330291s)
-- stdout --
	* Pausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause: exit status 80 (2.22576782s)
-- stdout --
	* Pausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.28s)
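Note: all three pause attempts above die at the same probe. Before freezing anything, minikube lists the node's containers with `sudo runc list -f json`, and runc exits 1 because its state directory /run/runc does not exist on this crio node; the unpause runs below fail identically (GUEST_UNPAUSE, same stderr). A minimal sketch reproducing the probe by hand, assuming the nospam-813872 node is still running:

    # the same listing the pause/unpause path shells out to (see the stderr above);
    # with no runc state directory present it exits 1 with
    # "open /run/runc: no such file or directory"
    out/minikube-linux-amd64 -p nospam-813872 ssh sudo runc list -f json
    echo $?   # 1, which minikube surfaces as exit status 80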

TestErrorSpam/unpause (5.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause: exit status 80 (1.962503903s)
-- stdout --
	* Unpausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause: exit status 80 (2.118232856s)
-- stdout --
	* Unpausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause: exit status 80 (1.672501971s)
-- stdout --
	* Unpausing node nospam-813872 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-18T09:04:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.75s)

TestErrorSpam/stop (8.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 stop: (7.834358277s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-813872 --log_dir /tmp/nospam-813872 stop
--- PASS: TestErrorSpam/stop (8.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21764-131066/.minikube/files/etc/test/nested/copy/134611/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-622052 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.605041892s)
--- PASS: TestFunctional/serial/StartWithProxy (41.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.51s)

=== RUN   TestFunctional/serial/SoftStart
I1018 09:05:54.908595  134611 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-622052 --alsologtostderr -v=8: (6.50865788s)
functional_test.go:678: soft start took 6.510414579s for "functional-622052" cluster.
I1018 09:06:01.418755  134611 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.51s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-622052 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 cache add registry.k8s.io/pause:3.3: (1.014717079s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.79s)

TestFunctional/serial/CacheCmd/cache/add_local (1.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-622052 /tmp/TestFunctionalserialCacheCmdcacheadd_local1713148370/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache add minikube-local-cache-test:functional-622052
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 cache add minikube-local-cache-test:functional-622052: (1.464124397s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache delete minikube-local-cache-test:functional-622052
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-622052
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.78s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (5.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (267.829898ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cache reload
E1018 09:06:10.474006  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.480398  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.491817  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.513226  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.554691  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.636150  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:10.797661  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:11.119413  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:11.761117  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 cache reload: (4.821479459s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (5.65s)
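Note: the sequence above is the whole cache-reload contract: remove the image on the node, confirm `crictl inspecti` no longer finds it (exit 1), run `cache reload` to push minikube's locally cached images back into the node, and confirm the inspect succeeds. Condensed, reusing the profile and image from the log:

    out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    out/minikube-linux-amd64 -p functional-622052 cache reload
    out/minikube-linux-amd64 -p functional-622052 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again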

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 kubectl -- --context functional-622052 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-622052 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (66.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1018 09:06:13.043030  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:15.606117  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:20.728054  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:30.969562  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:06:51.451104  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-622052 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.177105378s)
functional_test.go:776: restart took 1m6.177250875s for "functional-622052" cluster.
I1018 09:07:18.620193  134611 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (66.18s)
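Note: `--extra-config` takes `component.flag=value` pairs, so the run above injects enable-admission-plugins=NamespaceAutoProvision into the apiserver and restarts the cluster with --wait=all. One hedged way to confirm the flag landed; the pod name kube-apiserver-functional-622052 is assumed here from minikube's usual <component>-<profile> control-plane naming:

    kubectl --context functional-622052 -n kube-system \
      get pod kube-apiserver-functional-622052 -o jsonpath='{.spec.containers[0].command}' \
      | tr ',' '\n' | grep enable-admission-plugins
    # expected to print the --enable-admission-plugins flag containing NamespaceAutoProvision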

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-622052 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 logs: (1.206269061s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.25s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 logs --file /tmp/TestFunctionalserialLogsFileCmd2883443414/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 logs --file /tmp/TestFunctionalserialLogsFileCmd2883443414/001/logs.txt: (1.245934081s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (4.8s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-622052 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-622052
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-622052: exit status 115 (332.293737ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32148 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-622052 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-622052 delete -f testdata/invalidsvc.yaml: (1.301250684s)
--- PASS: TestFunctional/serial/InvalidService (4.80s)
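Note: the exit-115 path above shows that the NodePort for invalid-svc resolves (hence the URL table on stdout) while no running pod backs the service, so `minikube service` bails with SVC_UNREACHABLE. A sketch of the same round trip, with an endpoints query added here (not in the test) to show why the pod lookup comes up empty:

    kubectl --context functional-622052 apply -f testdata/invalidsvc.yaml
    kubectl --context functional-622052 get endpoints invalid-svc        # no ready addresses behind the service
    out/minikube-linux-amd64 service invalid-svc -p functional-622052   # exit 115: SVC_UNREACHABLE
    kubectl --context functional-622052 delete -f testdata/invalidsvc.yaml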

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 config get cpus: exit status 14 (74.294604ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 config get cpus: exit status 14 (51.280264ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)

TestFunctional/parallel/DashboardCmd (6.95s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-622052 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-622052 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 168339: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.95s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-622052 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (181.555643ms)
-- stdout --
	* [functional-622052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1018 09:07:26.544168  167123 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:26.545044  167123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.545071  167123 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:26.545078  167123 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.545520  167123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:07:26.546453  167123 out.go:368] Setting JSON to false
	I1018 09:07:26.547579  167123 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2991,"bootTime":1760775456,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:07:26.547671  167123 start.go:141] virtualization: kvm guest
	I1018 09:07:26.549164  167123 out.go:179] * [functional-622052] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:07:26.551318  167123 notify.go:220] Checking for updates...
	I1018 09:07:26.551329  167123 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:07:26.552452  167123 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:07:26.553847  167123 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:07:26.555070  167123 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:07:26.559014  167123 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:07:26.560319  167123 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:07:26.562008  167123 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:26.562696  167123 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:07:26.592379  167123 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:07:26.592496  167123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:07:26.670457  167123 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 09:07:26.657502071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:07:26.670594  167123 docker.go:318] overlay module found
	I1018 09:07:26.672243  167123 out.go:179] * Using the docker driver based on existing profile
	I1018 09:07:26.673416  167123 start.go:305] selected driver: docker
	I1018 09:07:26.673436  167123 start.go:925] validating driver "docker" against &{Name:functional-622052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622052 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:26.673559  167123 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:07:26.675422  167123 out.go:203] 
	W1018 09:07:26.676591  167123 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1018 09:07:26.677890  167123 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
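Note: --dry-run runs the full driver and resource validation without creating anything, which is why the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY against the 1800MB usable minimum while the second invocation (no --memory override) passes. The failing side of the boundary, as exercised above:

    # below the 1800MB floor reported in the stderr: exit 23
    out/minikube-linux-amd64 start -p functional-622052 --dry-run --memory 250MB \
      --driver=docker --container-runtime=crio
    # the second run above omits --memory, keeps the profile's 4096MB, and clears validation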

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-622052 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-622052 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (189.494903ms)
-- stdout --
	* [functional-622052] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1018 09:07:26.378919  166894 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:07:26.379043  166894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.379050  166894 out.go:374] Setting ErrFile to fd 2...
	I1018 09:07:26.379057  166894 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:07:26.379524  166894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:07:26.380161  166894 out.go:368] Setting JSON to false
	I1018 09:07:26.382725  166894 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2990,"bootTime":1760775456,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:07:26.382893  166894 start.go:141] virtualization: kvm guest
	I1018 09:07:26.385361  166894 out.go:179] * [functional-622052] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1018 09:07:26.386899  166894 notify.go:220] Checking for updates...
	I1018 09:07:26.387309  166894 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:07:26.389672  166894 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:07:26.391663  166894 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:07:26.392845  166894 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:07:26.393986  166894 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:07:26.398051  166894 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:07:26.399740  166894 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:07:26.400413  166894 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:07:26.428164  166894 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:07:26.428289  166894 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:07:26.488813  166894 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-18 09:07:26.47947018 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:07:26.488945  166894 docker.go:318] overlay module found
	I1018 09:07:26.492216  166894 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1018 09:07:26.493941  166894 start.go:305] selected driver: docker
	I1018 09:07:26.493965  166894 start.go:925] validating driver "docker" against &{Name:functional-622052 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-622052 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1018 09:07:26.494097  166894 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:07:26.495917  166894 out.go:203] 
	W1018 09:07:26.497050  166894 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250 MiB is less than the usable minimum of 1800 MB
	I1018 09:07:26.498013  166894 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.11s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
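
The format string above maps status fields onto a Go template, and -o json returns the same data machine-readably. A minimal sketch of both, reusing this run's profile (template keys as shown: Host, Kubelet, APIServer, Kubeconfig):

# Single field via a Go template
out/minikube-linux-amd64 -p functional-622052 status -f '{{.Host}}'
# Full machine-readable status
out/minikube-linux-amd64 -p functional-622052 status -o json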

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)
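
The JSON form above is the scriptable one. A small sketch of filtering it, assuming jq is available and the usual addon-name to {Status: ...} object shape of addons list -o json:

# Print only the enabled addons (jq and the Status field shape are assumptions here)
out/minikube-linux-amd64 -p functional-622052 addons list -o json \
  | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'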

TestFunctional/parallel/PersistentVolumeClaim (25.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [000af4bf-26ca-4345-92fd-027486f8b766] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004060872s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-622052 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-622052 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-622052 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-622052 apply -f testdata/storage-provisioner/pod.yaml
I1018 09:07:33.442878  134611 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [f0e51227-d559-407a-bea0-e04f1302672d] Pending
2025/10/18 09:07:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "sp-pod" [f0e51227-d559-407a-bea0-e04f1302672d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [f0e51227-d559-407a-bea0-e04f1302672d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003541395s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-622052 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-622052 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-622052 apply -f testdata/storage-provisioner/pod.yaml
I1018 09:07:45.122676  134611 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7bd3faff-546f-4795-8af6-23fd45c80c34] Pending
helpers_test.go:352: "sp-pod" [7bd3faff-546f-4795-8af6-23fd45c80c34] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7bd3faff-546f-4795-8af6-23fd45c80c34] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.003338988s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-622052 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.21s)
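
The run above is the whole persistence argument: a file written to the PVC-backed mount must still exist in a brand-new pod bound to the same claim. Condensed into the underlying kubectl steps, with the manifests and names used by the test:

kubectl --context functional-622052 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-622052 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-622052 exec sp-pod -- touch /tmp/mount/foo
# The claim outlives the pod, so the file must survive this delete/re-create cycle
kubectl --context functional-622052 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-622052 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-622052 exec sp-pod -- ls /tmp/mount   # expect: foo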

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (1.78s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh -n functional-622052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cp functional-622052:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4105975265/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh -n functional-622052 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh -n functional-622052 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.78s)

TestFunctional/parallel/MySQL (17.39s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-622052 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-w9gsr" [553c0252-a8a1-4885-b7fa-1ce9158e7dee] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-w9gsr" [553c0252-a8a1-4885-b7fa-1ce9158e7dee] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003097046s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- mysql -ppassword -e "show databases;": exit status 1 (85.319052ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1018 09:08:07.475928  134611 retry.go:31] will retry after 1.085117797s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- mysql -ppassword -e "show databases;": exit status 1 (85.520958ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1018 09:08:08.646930  134611 retry.go:31] will retry after 1.880503468s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- mysql -ppassword -e "show databases;"
E1018 09:08:54.335073  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:11:10.473119  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:11:38.177319  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:16:10.472898  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (17.39s)
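
The two failed attempts above are expected rather than flaky: the pod reports Running as soon as the container starts, but mysqld needs a few more seconds to create its socket, so the test retries with backoff. The same wait as a plain shell loop (a sketch; pod name taken from this run):

# Poll until mysqld accepts connections; the Running phase alone is not enough
until kubectl --context functional-622052 exec mysql-5bb876957f-w9gsr -- \
    mysql -ppassword -e "show databases;" >/dev/null 2>&1; do
  sleep 2
done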

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/134611/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /etc/test/nested/copy/134611/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.56s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/134611.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /etc/ssl/certs/134611.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/134611.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /usr/share/ca-certificates/134611.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1346112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /etc/ssl/certs/1346112.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1346112.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /usr/share/ca-certificates/1346112.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.56s)
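
The three locations checked per certificate are not independent copies: /etc/ssl/certs/51391683.0 is the OpenSSL subject-hash filename for the same CA as 134611.pem. One way to confirm the link, assuming openssl is present inside the node:

# The hashed filename should equal the cert's subject hash (51391683 per the paths above)
out/minikube-linux-amd64 -p functional-622052 ssh \
    "openssl x509 -noout -subject_hash -in /etc/ssl/certs/134611.pem"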

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-622052 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "sudo systemctl is-active docker": exit status 1 (287.816914ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "sudo systemctl is-active containerd": exit status 1 (283.223989ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
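
The exit codes above come from systemd, not minikube: systemctl is-active prints the unit state and exits 3 when the unit is inactive, which the ssh wrapper surfaces as a non-zero exit. Since this job runs crio, the active runtime should answer cleanly:

# docker and containerd are inactive above; crio is the runtime under test
out/minikube-linux-amd64 -p functional-622052 ssh "sudo systemctl is-active crio"   # expect: active, exit 0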

TestFunctional/parallel/License (0.31s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.56s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.56s)

TestFunctional/parallel/ImageCommands/ImageListShort (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 image ls --format short --alsologtostderr: (1.872288889s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622052 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622052 image ls --format short --alsologtostderr:
I1018 09:07:58.410995  174444 out.go:360] Setting OutFile to fd 1 ...
I1018 09:07:58.411277  174444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:58.411287  174444 out.go:374] Setting ErrFile to fd 2...
I1018 09:07:58.411293  174444 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:07:58.411562  174444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
I1018 09:07:58.412287  174444 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:58.412402  174444 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:07:58.412832  174444 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
I1018 09:07:58.433996  174444 ssh_runner.go:195] Run: systemctl --version
I1018 09:07:58.434061  174444 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
I1018 09:07:58.455153  174444 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
I1018 09:07:58.557696  174444 ssh_runner.go:195] Run: sudo crictl images --output json
I1018 09:08:00.229991  174444 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.672220428s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.87s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622052 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 07ccdb7838758 │ 164MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ localhost/my-image                      │ functional-622052  │ 1a0adffeab51c │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622052 image ls --format table --alsologtostderr:
I1018 09:08:04.097293  175311 out.go:360] Setting OutFile to fd 1 ...
I1018 09:08:04.097558  175311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:04.097570  175311 out.go:374] Setting ErrFile to fd 2...
I1018 09:08:04.097576  175311 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:04.097813  175311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
I1018 09:08:04.098419  175311 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:04.098534  175311 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:04.098983  175311 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
I1018 09:08:04.117681  175311 ssh_runner.go:195] Run: systemctl --version
I1018 09:08:04.117747  175311 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
I1018 09:08:04.136694  175311 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
I1018 09:08:04.231372  175311 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622052 image ls --format json --alsologtostderr:
[{"id":"1a0adffeab51c258cb080dba8d80ceab96738656a51ca0ae60c0ba62d02bb45c","repoDigests":["localhost/my-image@sha256:ab746b6186f4387a5591ae8d7330b2a25c5fa5eced19622aa7d509019fc2b5f2"],"repoTags":["localhost/my-image:functional-622052"],"size":"1468744"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha25
6:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":[
"docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe163
79ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"4a3a0de68b3a86
9097e5c63071e170dba3a3d3aa631ea8585bb81159fbd5712c","repoDigests":["docker.io/library/9191ecf0fc5a23ae4bd51d46b4b5f63eb909348777db0e549cbe1d56a38ac999-tmp@sha256:1320ea31d4449d9a57f37dd51e8fe31f751f5781424ad4e24bb12e68e09332fb"],"repoTags":[],"size":"1466132"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sh
a256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938","repoDigests":["docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7dd
ff32f81cd6be4614d07240ec115","docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6"],"repoTags":["docker.io/library/nginx:latest"],"size":"163615579"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622052 image ls --format json --alsologtostderr:
I1018 09:08:03.888288  175257 out.go:360] Setting OutFile to fd 1 ...
I1018 09:08:03.888578  175257 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:03.888602  175257 out.go:374] Setting ErrFile to fd 2...
I1018 09:08:03.888609  175257 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:03.888841  175257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
I1018 09:08:03.889406  175257 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:03.889518  175257 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:03.890085  175257 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
I1018 09:08:03.907408  175257 ssh_runner.go:195] Run: systemctl --version
I1018 09:08:03.907454  175257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
I1018 09:08:03.924184  175257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
I1018 09:08:04.018589  175257 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622052 image ls --format yaml --alsologtostderr:
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 07ccdb7838758e758a4d52a9761636c385125a327355c0c94a6acff9babff938
repoDigests:
- docker.io/library/nginx@sha256:35fabd32a7582bed5da0a40f41fd4984df7ddff32f81cd6be4614d07240ec115
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
repoTags:
- docker.io/library/nginx:latest
size: "163615579"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622052 image ls --format yaml --alsologtostderr:
I1018 09:08:00.282348  174541 out.go:360] Setting OutFile to fd 1 ...
I1018 09:08:00.282643  174541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:00.282656  174541 out.go:374] Setting ErrFile to fd 2...
I1018 09:08:00.282662  174541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:00.282954  174541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
I1018 09:08:00.283731  174541 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:00.283890  174541 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:00.284409  174541 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
I1018 09:08:00.303300  174541 ssh_runner.go:195] Run: systemctl --version
I1018 09:08:00.303352  174541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
I1018 09:08:00.322261  174541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
I1018 09:08:00.418397  174541 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh pgrep buildkitd: exit status 1 (263.704219ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image build -t localhost/my-image:functional-622052 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 image build -t localhost/my-image:functional-622052 testdata/build --alsologtostderr: (2.911960563s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-622052 image build -t localhost/my-image:functional-622052 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 4a3a0de68b3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-622052
--> 1a0adffeab5
Successfully tagged localhost/my-image:functional-622052
1a0adffeab51c258cb080dba8d80ceab96738656a51ca0ae60c0ba62d02bb45c
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-622052 image build -t localhost/my-image:functional-622052 testdata/build --alsologtostderr:
I1018 09:08:00.766277  174759 out.go:360] Setting OutFile to fd 1 ...
I1018 09:08:00.766400  174759 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:00.766409  174759 out.go:374] Setting ErrFile to fd 2...
I1018 09:08:00.766413  174759 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1018 09:08:00.766596  174759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
I1018 09:08:00.767204  174759 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:00.767879  174759 config.go:182] Loaded profile config "functional-622052": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1018 09:08:00.768335  174759 cli_runner.go:164] Run: docker container inspect functional-622052 --format={{.State.Status}}
I1018 09:08:00.786987  174759 ssh_runner.go:195] Run: systemctl --version
I1018 09:08:00.787032  174759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-622052
I1018 09:08:00.804968  174759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32898 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/functional-622052/id_rsa Username:docker}
I1018 09:08:00.902443  174759 build_images.go:161] Building image from path: /tmp/build.1761150730.tar
I1018 09:08:00.902524  174759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1018 09:08:00.910493  174759 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1761150730.tar
I1018 09:08:00.914088  174759 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1761150730.tar: stat -c "%s %y" /var/lib/minikube/build/build.1761150730.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1761150730.tar': No such file or directory
I1018 09:08:00.914117  174759 ssh_runner.go:362] scp /tmp/build.1761150730.tar --> /var/lib/minikube/build/build.1761150730.tar (3072 bytes)
I1018 09:08:00.932010  174759 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1761150730
I1018 09:08:00.940210  174759 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1761150730 -xf /var/lib/minikube/build/build.1761150730.tar
I1018 09:08:00.947941  174759 crio.go:315] Building image: /var/lib/minikube/build/build.1761150730
I1018 09:08:00.947992  174759 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-622052 /var/lib/minikube/build/build.1761150730 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1018 09:08:03.607597  174759 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-622052 /var/lib/minikube/build/build.1761150730 --cgroup-manager=cgroupfs: (2.65955401s)
I1018 09:08:03.607681  174759 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1761150730
I1018 09:08:03.615684  174759 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1761150730.tar
I1018 09:08:03.623292  174759 build_images.go:217] Built localhost/my-image:functional-622052 from /tmp/build.1761150730.tar
I1018 09:08:03.623328  174759 build_images.go:133] succeeded building to: functional-622052
I1018 09:08:03.623335  174759 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.39s)
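
As the stderr trace shows, image build never builds on the host: it tars the local context, copies it into the node, and (under crio) delegates to sudo podman build there. The user-facing round trip reduces to two commands:

# Build a directory containing a Dockerfile straight into the node's image store
out/minikube-linux-amd64 -p functional-622052 image build -t localhost/my-image:functional-622052 testdata/build
out/minikube-linux-amd64 -p functional-622052 image ls   # localhost/my-image:functional-622052 should appear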

TestFunctional/parallel/ImageCommands/Setup (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.707222466s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-622052
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image rm kicbase/echo-server:functional-622052 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 169623: os: process already finished
helpers_test.go:519: unable to terminate pid 169358: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-622052 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [20ec9895-7989-47aa-b802-594769960df2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [20ec9895-7989-47aa-b802-594769960df2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003411897s
I1018 09:07:46.008412  134611 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.21s)
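
This wait only succeeds because the tunnel from StartTunnel is still running: minikube tunnel stays alive and routes LoadBalancer traffic so services like nginx-svc get a reachable IP. A minimal sketch of the same setup by hand:

out/minikube-linux-amd64 -p functional-622052 tunnel &   # keep running in the background
kubectl --context functional-622052 apply -f testdata/testsvc.yaml
kubectl --context functional-622052 get svc nginx-svc    # EXTERNAL-IP fills in once the tunnel is up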

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "376.461321ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "57.843249ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "389.48958ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.472826ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.39s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-622052 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
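
The IngressIP step above reads the LoadBalancer address that the running tunnel populates on the Service. Below is a minimal Go sketch of the same poll; it assumes kubectl is on PATH and reuses this run's context name, and is illustrative rather than the harness's actual code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// Poll the Service until the tunnel controller assigns an ingress IP.
func main() {
	args := []string{
		"--context", "functional-622052", "get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}",
	}
	for attempt := 0; attempt < 30; attempt++ { // ~60s budget; the bound is an assumption
		out, err := exec.Command("kubectl", args...).Output()
		if ip := strings.TrimSpace(string(out)); err == nil && ip != "" {
			fmt.Println("ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for ingress IP")
}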

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.33.6 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
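
AccessDirect then confirms the tunnel actually routes traffic by hitting the Service at that LoadBalancer IP. A hedged sketch of such a probe, using the URL logged above (the timeout is an assumed value, not taken from the test):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 5s timeout is an assumption; the test's own deadline may differ.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://10.108.33.6")
	if err != nil {
		fmt.Println("tunnel not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel responded:", resp.Status)
}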

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-622052 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.8s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdany-port3624630862/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760778466172220082" to /tmp/TestFunctionalparallelMountCmdany-port3624630862/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760778466172220082" to /tmp/TestFunctionalparallelMountCmdany-port3624630862/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760778466172220082" to /tmp/TestFunctionalparallelMountCmdany-port3624630862/001/test-1760778466172220082
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (266.37461ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1018 09:07:46.438933  134611 retry.go:31] will retry after 652.875221ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 18 09:07 test-1760778466172220082
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh cat /mount-9p/test-1760778466172220082
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-622052 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [5fbcefee-0330-4ff1-916b-974236b57bb3] Pending
helpers_test.go:352: "busybox-mount" [5fbcefee-0330-4ff1-916b-974236b57bb3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [5fbcefee-0330-4ff1-916b-974236b57bb3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [5fbcefee-0330-4ff1-916b-974236b57bb3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004233319s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-622052 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdany-port3624630862/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.80s)
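
The non-zero findmnt probe at the top of this test is expected: the first check races the mount daemon, so the harness retries after a delay ("will retry after 652.875221ms"). A minimal sketch of that poll-with-backoff pattern follows; the attempt count and backoff values are assumptions, not minikube's retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	backoff := 500 * time.Millisecond // assumed initial delay
	for attempt := 1; attempt <= 5; attempt++ {
		// Same probe the harness runs: is the 9p mount visible in the guest?
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-622052",
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := cmd.Output(); err == nil {
			fmt.Printf("mount visible after %d attempt(s):\n%s", attempt, out)
			return
		}
		time.Sleep(backoff)
		backoff *= 2 // widen the window between probes
	}
	fmt.Println("mount never appeared")
}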

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdspecific-port3282120928/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.234869ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1018 09:07:54.250952  134611 retry.go:31] will retry after 392.566274ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdspecific-port3282120928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "sudo umount -f /mount-9p": exit status 1 (347.842745ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-622052 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdspecific-port3282120928/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T" /mount1: exit status 1 (419.264732ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
I1018 09:07:56.179420  134611 retry.go:31] will retry after 610.646202ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-622052 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-622052 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1256507331/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 service list: (1.687454064s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-622052 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-622052 service list -o json: (1.680944273s)
functional_test.go:1504: Took "1.68104621s" to run "out/minikube-linux-amd64 -p functional-622052 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.68s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-622052
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-622052
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-622052
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (153.33s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m32.637131636s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (153.33s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.05s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 kubectl -- rollout status deployment/busybox: (3.305677517s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-2pfrw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-747c2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-xzbhs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-2pfrw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-747c2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-xzbhs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-2pfrw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-747c2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-xzbhs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.05s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.94s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-2pfrw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-2pfrw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-747c2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-747c2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-xzbhs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 kubectl -- exec busybox-7b57f96db7-xzbhs -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.94s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 node add --alsologtostderr -v 5: (23.312349408s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.18s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-116280 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.34s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp testdata/cp-test.txt ha-116280:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299874705/001/cp-test_ha-116280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280:/home/docker/cp-test.txt ha-116280-m02:/home/docker/cp-test_ha-116280_ha-116280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test_ha-116280_ha-116280-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280:/home/docker/cp-test.txt ha-116280-m03:/home/docker/cp-test_ha-116280_ha-116280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test_ha-116280_ha-116280-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280:/home/docker/cp-test.txt ha-116280-m04:/home/docker/cp-test_ha-116280_ha-116280-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test_ha-116280_ha-116280-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp testdata/cp-test.txt ha-116280-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299874705/001/cp-test_ha-116280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m02:/home/docker/cp-test.txt ha-116280:/home/docker/cp-test_ha-116280-m02_ha-116280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test_ha-116280-m02_ha-116280.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m02:/home/docker/cp-test.txt ha-116280-m03:/home/docker/cp-test_ha-116280-m02_ha-116280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test_ha-116280-m02_ha-116280-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m02:/home/docker/cp-test.txt ha-116280-m04:/home/docker/cp-test_ha-116280-m02_ha-116280-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test_ha-116280-m02_ha-116280-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp testdata/cp-test.txt ha-116280-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299874705/001/cp-test_ha-116280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m03:/home/docker/cp-test.txt ha-116280:/home/docker/cp-test_ha-116280-m03_ha-116280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test_ha-116280-m03_ha-116280.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m03:/home/docker/cp-test.txt ha-116280-m02:/home/docker/cp-test_ha-116280-m03_ha-116280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test_ha-116280-m03_ha-116280-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m03:/home/docker/cp-test.txt ha-116280-m04:/home/docker/cp-test_ha-116280-m03_ha-116280-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test_ha-116280-m03_ha-116280-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp testdata/cp-test.txt ha-116280-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299874705/001/cp-test_ha-116280-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m04:/home/docker/cp-test.txt ha-116280:/home/docker/cp-test_ha-116280-m04_ha-116280.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280 "sudo cat /home/docker/cp-test_ha-116280-m04_ha-116280.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m04:/home/docker/cp-test.txt ha-116280-m02:/home/docker/cp-test_ha-116280-m04_ha-116280-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m02 "sudo cat /home/docker/cp-test_ha-116280-m04_ha-116280-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 cp ha-116280-m04:/home/docker/cp-test.txt ha-116280-m03:/home/docker/cp-test_ha-116280-m04_ha-116280-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 ssh -n ha-116280-m03 "sudo cat /home/docker/cp-test_ha-116280-m04_ha-116280-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.34s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (14.22s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node stop m02 --alsologtostderr -v 5
E1018 09:21:10.472976  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 node stop m02 --alsologtostderr -v 5: (13.540599586s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5: exit status 7 (675.667134ms)

-- stdout --
	ha-116280
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-116280-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116280-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-116280-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I1018 09:21:19.413907  199961 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:21:19.414131  199961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:19.414140  199961 out.go:374] Setting ErrFile to fd 2...
	I1018 09:21:19.414144  199961 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:21:19.414347  199961 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:21:19.414516  199961 out.go:368] Setting JSON to false
	I1018 09:21:19.414551  199961 mustload.go:65] Loading cluster: ha-116280
	I1018 09:21:19.414668  199961 notify.go:220] Checking for updates...
	I1018 09:21:19.414960  199961 config.go:182] Loaded profile config "ha-116280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:21:19.414974  199961 status.go:174] checking status of ha-116280 ...
	I1018 09:21:19.415430  199961 cli_runner.go:164] Run: docker container inspect ha-116280 --format={{.State.Status}}
	I1018 09:21:19.437484  199961 status.go:371] ha-116280 host status = "Running" (err=<nil>)
	I1018 09:21:19.437535  199961 host.go:66] Checking if "ha-116280" exists ...
	I1018 09:21:19.437963  199961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116280
	I1018 09:21:19.455375  199961 host.go:66] Checking if "ha-116280" exists ...
	I1018 09:21:19.455622  199961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:21:19.455674  199961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116280
	I1018 09:21:19.473907  199961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/ha-116280/id_rsa Username:docker}
	I1018 09:21:19.568476  199961 ssh_runner.go:195] Run: systemctl --version
	I1018 09:21:19.574767  199961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:19.587032  199961 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:21:19.644346  199961 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-18 09:21:19.634125939 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:21:19.644889  199961 kubeconfig.go:125] found "ha-116280" server: "https://192.168.49.254:8443"
	I1018 09:21:19.644924  199961 api_server.go:166] Checking apiserver status ...
	I1018 09:21:19.644961  199961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:21:19.657342  199961 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup
	W1018 09:21:19.665975  199961 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1238/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:21:19.666029  199961 ssh_runner.go:195] Run: ls
	I1018 09:21:19.669974  199961 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 09:21:19.674249  199961 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 09:21:19.674272  199961 status.go:463] ha-116280 apiserver status = Running (err=<nil>)
	I1018 09:21:19.674280  199961 status.go:176] ha-116280 status: &{Name:ha-116280 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:21:19.674296  199961 status.go:174] checking status of ha-116280-m02 ...
	I1018 09:21:19.674511  199961 cli_runner.go:164] Run: docker container inspect ha-116280-m02 --format={{.State.Status}}
	I1018 09:21:19.691582  199961 status.go:371] ha-116280-m02 host status = "Stopped" (err=<nil>)
	I1018 09:21:19.691657  199961 status.go:384] host is not running, skipping remaining checks
	I1018 09:21:19.691663  199961 status.go:176] ha-116280-m02 status: &{Name:ha-116280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:21:19.691688  199961 status.go:174] checking status of ha-116280-m03 ...
	I1018 09:21:19.691991  199961 cli_runner.go:164] Run: docker container inspect ha-116280-m03 --format={{.State.Status}}
	I1018 09:21:19.709053  199961 status.go:371] ha-116280-m03 host status = "Running" (err=<nil>)
	I1018 09:21:19.709077  199961 host.go:66] Checking if "ha-116280-m03" exists ...
	I1018 09:21:19.709497  199961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116280-m03
	I1018 09:21:19.726493  199961 host.go:66] Checking if "ha-116280-m03" exists ...
	I1018 09:21:19.726775  199961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:21:19.726846  199961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116280-m03
	I1018 09:21:19.744331  199961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/ha-116280-m03/id_rsa Username:docker}
	I1018 09:21:19.839101  199961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:19.851971  199961 kubeconfig.go:125] found "ha-116280" server: "https://192.168.49.254:8443"
	I1018 09:21:19.852003  199961 api_server.go:166] Checking apiserver status ...
	I1018 09:21:19.852043  199961 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:21:19.863048  199961 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup
	W1018 09:21:19.871337  199961 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1164/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:21:19.871397  199961 ssh_runner.go:195] Run: ls
	I1018 09:21:19.875299  199961 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1018 09:21:19.879358  199961 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1018 09:21:19.879379  199961 status.go:463] ha-116280-m03 apiserver status = Running (err=<nil>)
	I1018 09:21:19.879387  199961 status.go:176] ha-116280-m03 status: &{Name:ha-116280-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:21:19.879401  199961 status.go:174] checking status of ha-116280-m04 ...
	I1018 09:21:19.879632  199961 cli_runner.go:164] Run: docker container inspect ha-116280-m04 --format={{.State.Status}}
	I1018 09:21:19.898066  199961 status.go:371] ha-116280-m04 host status = "Running" (err=<nil>)
	I1018 09:21:19.898092  199961 host.go:66] Checking if "ha-116280-m04" exists ...
	I1018 09:21:19.898390  199961 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116280-m04
	I1018 09:21:19.916930  199961 host.go:66] Checking if "ha-116280-m04" exists ...
	I1018 09:21:19.917176  199961 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:21:19.917214  199961 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116280-m04
	I1018 09:21:19.935450  199961 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/ha-116280-m04/id_rsa Username:docker}
	I1018 09:21:20.029035  199961 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:21:20.041358  199961 status.go:176] ha-116280-m04 status: &{Name:ha-116280-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.22s)
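
Worth noting: the status command above exits 7 rather than 0 because one control-plane node is stopped, so automation can branch on the exit code instead of parsing the table. A sketch of that check is below; only "non-zero means something is down" is asserted here, not the meaning of individual codes:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-116280", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 in the run above, with ha-116280-m02 stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("all components running")
	}
}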

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.64s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 node start m02 --alsologtostderr -v 5: (13.722443771s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.64s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.51s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 stop --alsologtostderr -v 5
E1018 09:22:26.134706  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.141214  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.152610  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.174092  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.215588  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.297055  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.458633  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:26.780373  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 stop --alsologtostderr -v 5: (50.930466852s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 start --wait true --alsologtostderr -v 5
E1018 09:22:27.421693  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:28.703116  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:31.264743  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:33.539349  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:36.386426  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:22:46.627792  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:23:07.109970  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 start --wait true --alsologtostderr -v 5: (1m8.469220972s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (119.51s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.5s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 node delete m03 --alsologtostderr -v 5: (9.707750897s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (43.38s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 stop --alsologtostderr -v 5
E1018 09:23:48.072026  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 stop --alsologtostderr -v 5: (43.278157079s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5: exit status 7 (102.059186ms)

-- stdout --
	ha-116280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116280-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116280-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1018 09:24:30.252364  214232 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:24:30.252589  214232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:24:30.252597  214232 out.go:374] Setting ErrFile to fd 2...
	I1018 09:24:30.252600  214232 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:24:30.252787  214232 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:24:30.252962  214232 out.go:368] Setting JSON to false
	I1018 09:24:30.252989  214232 mustload.go:65] Loading cluster: ha-116280
	I1018 09:24:30.253031  214232 notify.go:220] Checking for updates...
	I1018 09:24:30.253345  214232 config.go:182] Loaded profile config "ha-116280": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:24:30.253359  214232 status.go:174] checking status of ha-116280 ...
	I1018 09:24:30.253747  214232 cli_runner.go:164] Run: docker container inspect ha-116280 --format={{.State.Status}}
	I1018 09:24:30.271726  214232 status.go:371] ha-116280 host status = "Stopped" (err=<nil>)
	I1018 09:24:30.271768  214232 status.go:384] host is not running, skipping remaining checks
	I1018 09:24:30.271775  214232 status.go:176] ha-116280 status: &{Name:ha-116280 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:24:30.271864  214232 status.go:174] checking status of ha-116280-m02 ...
	I1018 09:24:30.272169  214232 cli_runner.go:164] Run: docker container inspect ha-116280-m02 --format={{.State.Status}}
	I1018 09:24:30.289691  214232 status.go:371] ha-116280-m02 host status = "Stopped" (err=<nil>)
	I1018 09:24:30.289713  214232 status.go:384] host is not running, skipping remaining checks
	I1018 09:24:30.289720  214232 status.go:176] ha-116280-m02 status: &{Name:ha-116280-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:24:30.289743  214232 status.go:174] checking status of ha-116280-m04 ...
	I1018 09:24:30.289990  214232 cli_runner.go:164] Run: docker container inspect ha-116280-m04 --format={{.State.Status}}
	I1018 09:24:30.306299  214232 status.go:371] ha-116280-m04 host status = "Stopped" (err=<nil>)
	I1018 09:24:30.306341  214232 status.go:384] host is not running, skipping remaining checks
	I1018 09:24:30.306355  214232 status.go:176] ha-116280-m04 status: &{Name:ha-116280-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.38s)

TestMultiControlPlane/serial/RestartCluster (52.76s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1018 09:25:09.993510  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (51.968985901s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.76s)
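
The quoted go-template above ranges over every node in the NodeList and over each node's status.conditions, printing the status of the condition whose type is "Ready", one value per node, presumably so the test can assert every node reports True. A minimal Go sketch of how that template evaluates, run against a trimmed-down, hypothetical NodeList document (text/template resolves the lowercase keys because the data is a decoded JSON map, which is also how kubectl evaluates go-templates):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// The same template string the test passes to `kubectl get nodes -o go-template`.
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Hypothetical two-node sample carrying only the fields the template reads.
const sample = `{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}},{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

func main() {
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(sample), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	if err := t.Execute(os.Stdout, doc); err != nil { // prints " True" once per node
		panic(err)
	}
}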

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.70s)

TestMultiControlPlane/serial/AddSecondaryNode (54.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 node add --control-plane --alsologtostderr -v 5
E1018 09:26:10.472253  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-116280 node add --control-plane --alsologtostderr -v 5: (53.954826947s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-116280 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (54.81s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.86s)

TestJSONOutput/start/Command (36.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-309581 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-309581 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (36.885569723s)
--- PASS: TestJSONOutput/start/Command (36.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.91s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-309581 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-309581 --output=json --user=testUser: (7.910337716s)
--- PASS: TestJSONOutput/stop/Command (7.91s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-427891 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-427891 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (61.732104ms)

-- stdout --
	{"specversion":"1.0","id":"be6964a3-aee7-4c3e-bfb3-74fa4fd00634","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-427891] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a828af2-0b50-4ac3-9485-d72c0ed028c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21764"}}
	{"specversion":"1.0","id":"387292a6-c07e-413f-ad3c-6b12a0a48a55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b9246e1-bb56-44a0-96c8-7c90651b790c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig"}}
	{"specversion":"1.0","id":"ab873565-33ce-45b6-8964-319562bc73bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube"}}
	{"specversion":"1.0","id":"dc8dd60e-3204-45b2-8554-e38f75f5ec63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"fe013929-0138-4bf6-8681-04c7c7fbf297","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f9aca356-d43a-4062-8029-2854efc1db5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-427891" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-427891
--- PASS: TestErrorJSONOutput (0.20s)
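
Every line minikube prints with --output=json is a CloudEvents-style envelope (specversion, id, source, type, datacontenttype, data), as the stdout above shows; the final io.k8s.sigs.minikube.error event carries the DRV_UNSUPPORTED_OS name and exit code 56 that the test expects. A small, hypothetical Go consumer for such a stream, with field names taken from the output above (a sketch only, not minikube's own decoder):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the JSON lines above; every data value shown there is a string.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json ... | this program
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit code %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}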

TestKicCustomNetwork/create_custom_network (35.77s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-643790 --network=
E1018 09:27:26.130033  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:27:53.837473  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-643790 --network=: (33.6342991s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-643790" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-643790
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-643790: (2.116859665s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.77s)

TestKicCustomNetwork/use_default_bridge_network (23.98s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-689762 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-689762 --network=bridge: (21.978557111s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-689762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-689762
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-689762: (1.978175382s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.98s)

TestKicExistingNetwork (23.65s)

=== RUN   TestKicExistingNetwork
I1018 09:28:22.327083  134611 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1018 09:28:22.343772  134611 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1018 09:28:22.343875  134611 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1018 09:28:22.343903  134611 cli_runner.go:164] Run: docker network inspect existing-network
W1018 09:28:22.359932  134611 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1018 09:28:22.359961  134611 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1018 09:28:22.359980  134611 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1018 09:28:22.360082  134611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1018 09:28:22.376626  134611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-feaba2b58bf0 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:16:21:45:2e:39:fd} reservation:<nil>}
I1018 09:28:22.376970  134611 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000376ae0}
I1018 09:28:22.377000  134611 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1018 09:28:22.377042  134611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1018 09:28:22.435258  134611 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-281037 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-281037 --network=existing-network: (21.549832469s)
helpers_test.go:175: Cleaning up "existing-network-281037" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-281037
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-281037: (1.959418044s)
I1018 09:28:45.962208  134611 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.65s)
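
The network_create log above records the exact docker invocation minikube used after rejecting the occupied 192.168.49.0/24 subnet: a bridge network on 192.168.58.0/24 tagged with minikube's labels, which the final `docker network ls --filter=label=...` step then looks up. A standalone sketch reproducing that command via os/exec (assumes a local Docker daemon; the test itself goes through minikube's network_create helpers rather than this):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the `docker network create` call logged by network_create.go above.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}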

TestKicCustomSubnet (25.66s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-310826 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-310826 --subnet=192.168.60.0/24: (23.526250701s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-310826 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-310826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-310826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-310826: (2.11393375s)
--- PASS: TestKicCustomSubnet (25.66s)

TestKicStaticIP (27.3s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-934074 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-934074 --static-ip=192.168.200.200: (25.007759307s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-934074 ip
helpers_test.go:175: Cleaning up "static-ip-934074" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-934074
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-934074: (2.152411355s)
--- PASS: TestKicStaticIP (27.30s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.53s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-663180 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-663180 --driver=docker  --container-runtime=crio: (20.575588036s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-665733 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-665733 --driver=docker  --container-runtime=crio: (21.112876294s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-663180
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-665733
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-665733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-665733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-665733: (2.326355998s)
helpers_test.go:175: Cleaning up "first-663180" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-663180
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-663180: (2.341082156s)
--- PASS: TestMinikubeProfile (47.53s)

TestMountStart/serial/StartWithMountFirst (8.75s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-587618 --memory=3072 --mount-string /tmp/TestMountStartserial832545429/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-587618 --memory=3072 --mount-string /tmp/TestMountStartserial832545429/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.748977136s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.75s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-587618 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (5.82s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600688 --memory=3072 --mount-string /tmp/TestMountStartserial832545429/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600688 --memory=3072 --mount-string /tmp/TestMountStartserial832545429/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.820310707s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.82s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600688 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-587618 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-587618 --alsologtostderr -v=5: (1.689496786s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600688 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-600688
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-600688: (1.241714327s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.77s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-600688
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-600688: (6.766077667s)
--- PASS: TestMountStart/serial/RestartStopped (7.77s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-600688 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (92.28s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-459756 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1018 09:31:10.473180  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:32:26.130138  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-459756 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m31.813296945s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (92.28s)

TestMultiNode/serial/DeployApp2Nodes (4.57s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-459756 -- rollout status deployment/busybox: (3.24216143s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-mvz64 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-wxmb4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-mvz64 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-wxmb4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-mvz64 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-wxmb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.57s)

TestMultiNode/serial/PingHostFrom2Pods (0.64s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-mvz64 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-mvz64 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-wxmb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-459756 -- exec busybox-7b57f96db7-wxmb4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.64s)
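
The shell pipeline in this test extracts the host IP from busybox's nslookup output: awk 'NR==5' keeps only the fifth line, and cut -d' ' -f3 takes its third space-separated field, the address resolved for host.minikube.internal (192.168.67.1, the cluster network's gateway, judging by the ping targets above). A Go equivalent of that extraction step, assuming the same fixed line and field positions:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Reads nslookup output on stdin and prints field 3 of line 5,
	// matching `awk 'NR==5' | cut -d' ' -f3`.
	sc := bufio.NewScanner(os.Stdin)
	for n := 1; sc.Scan(); n++ {
		if n != 5 {
			continue
		}
		// cut -d' ' splits on single spaces, so strings.Split matches its behavior.
		if f := strings.Split(sc.Text(), " "); len(f) >= 3 {
			fmt.Println(f[2])
		}
		return
	}
}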

TestMultiNode/serial/AddNode (24.12s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-459756 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-459756 -v=5 --alsologtostderr: (23.485940324s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.12s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-459756 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp testdata/cp-test.txt multinode-459756:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3623565110/001/cp-test_multinode-459756.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756:/home/docker/cp-test.txt multinode-459756-m02:/home/docker/cp-test_multinode-459756_multinode-459756-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test_multinode-459756_multinode-459756-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756:/home/docker/cp-test.txt multinode-459756-m03:/home/docker/cp-test_multinode-459756_multinode-459756-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test_multinode-459756_multinode-459756-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp testdata/cp-test.txt multinode-459756-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3623565110/001/cp-test_multinode-459756-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m02:/home/docker/cp-test.txt multinode-459756:/home/docker/cp-test_multinode-459756-m02_multinode-459756.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test_multinode-459756-m02_multinode-459756.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m02:/home/docker/cp-test.txt multinode-459756-m03:/home/docker/cp-test_multinode-459756-m02_multinode-459756-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test_multinode-459756-m02_multinode-459756-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp testdata/cp-test.txt multinode-459756-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3623565110/001/cp-test_multinode-459756-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m03:/home/docker/cp-test.txt multinode-459756:/home/docker/cp-test_multinode-459756-m03_multinode-459756.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756 "sudo cat /home/docker/cp-test_multinode-459756-m03_multinode-459756.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 cp multinode-459756-m03:/home/docker/cp-test.txt multinode-459756-m02:/home/docker/cp-test_multinode-459756-m03_multinode-459756-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 ssh -n multinode-459756-m02 "sudo cat /home/docker/cp-test_multinode-459756-m03_multinode-459756-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.39s)

TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-459756 node stop m03: (1.256934029s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-459756 status: exit status 7 (481.928363ms)

-- stdout --
	multinode-459756
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-459756-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-459756-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr: exit status 7 (475.059475ms)

-- stdout --
	multinode-459756
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-459756-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-459756-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 09:33:08.146810  273900 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:33:08.147049  273900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:08.147059  273900 out.go:374] Setting ErrFile to fd 2...
	I1018 09:33:08.147066  273900 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:33:08.147260  273900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:33:08.147483  273900 out.go:368] Setting JSON to false
	I1018 09:33:08.147515  273900 mustload.go:65] Loading cluster: multinode-459756
	I1018 09:33:08.147642  273900 notify.go:220] Checking for updates...
	I1018 09:33:08.147973  273900 config.go:182] Loaded profile config "multinode-459756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:33:08.147990  273900 status.go:174] checking status of multinode-459756 ...
	I1018 09:33:08.148486  273900 cli_runner.go:164] Run: docker container inspect multinode-459756 --format={{.State.Status}}
	I1018 09:33:08.168331  273900 status.go:371] multinode-459756 host status = "Running" (err=<nil>)
	I1018 09:33:08.168390  273900 host.go:66] Checking if "multinode-459756" exists ...
	I1018 09:33:08.168872  273900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-459756
	I1018 09:33:08.186047  273900 host.go:66] Checking if "multinode-459756" exists ...
	I1018 09:33:08.186325  273900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:33:08.186390  273900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-459756
	I1018 09:33:08.203667  273900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33023 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/multinode-459756/id_rsa Username:docker}
	I1018 09:33:08.297296  273900 ssh_runner.go:195] Run: systemctl --version
	I1018 09:33:08.303567  273900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:08.315870  273900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:33:08.370537  273900 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-18 09:33:08.360222993 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:33:08.371096  273900 kubeconfig.go:125] found "multinode-459756" server: "https://192.168.67.2:8443"
	I1018 09:33:08.371131  273900 api_server.go:166] Checking apiserver status ...
	I1018 09:33:08.371172  273900 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1018 09:33:08.382751  273900 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1018 09:33:08.391017  273900 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1018 09:33:08.391077  273900 ssh_runner.go:195] Run: ls
	I1018 09:33:08.394798  273900 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1018 09:33:08.398958  273900 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1018 09:33:08.398982  273900 status.go:463] multinode-459756 apiserver status = Running (err=<nil>)
	I1018 09:33:08.398991  273900 status.go:176] multinode-459756 status: &{Name:multinode-459756 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:33:08.399006  273900 status.go:174] checking status of multinode-459756-m02 ...
	I1018 09:33:08.399309  273900 cli_runner.go:164] Run: docker container inspect multinode-459756-m02 --format={{.State.Status}}
	I1018 09:33:08.416254  273900 status.go:371] multinode-459756-m02 host status = "Running" (err=<nil>)
	I1018 09:33:08.416277  273900 host.go:66] Checking if "multinode-459756-m02" exists ...
	I1018 09:33:08.416508  273900 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-459756-m02
	I1018 09:33:08.433533  273900 host.go:66] Checking if "multinode-459756-m02" exists ...
	I1018 09:33:08.433780  273900 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1018 09:33:08.433813  273900 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-459756-m02
	I1018 09:33:08.450923  273900 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21764-131066/.minikube/machines/multinode-459756-m02/id_rsa Username:docker}
	I1018 09:33:08.543889  273900 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1018 09:33:08.555975  273900 status.go:176] multinode-459756-m02 status: &{Name:multinode-459756-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:33:08.556010  273900 status.go:174] checking status of multinode-459756-m03 ...
	I1018 09:33:08.556358  273900 cli_runner.go:164] Run: docker container inspect multinode-459756-m03 --format={{.State.Status}}
	I1018 09:33:08.573742  273900 status.go:371] multinode-459756-m03 host status = "Stopped" (err=<nil>)
	I1018 09:33:08.573761  273900 status.go:384] host is not running, skipping remaining checks
	I1018 09:33:08.573767  273900 status.go:176] multinode-459756-m03 status: &{Name:multinode-459756-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
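
The status lines in the stderr above are %+v dumps of minikube's per-node status value, so the full field set is visible in the log itself. A sketch of that shape reconstructed purely from the printed output (field names are exactly as printed; the types are inferred, and the real definition lives in minikube's status code):

package status

// Status mirrors dumps like:
// &{Name:multinode-459756-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
type Status struct {
	Name       string
	Host       string // "Running" or "Stopped"
	Kubelet    string
	APIServer  string // "Irrelevant" on worker nodes
	Kubeconfig string // "Configured", "Irrelevant", or "Stopped"
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}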

TestMultiNode/serial/StartAfterStop (7.09s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-459756 node start m03 -v=5 --alsologtostderr: (6.412092074s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.09s)

TestMultiNode/serial/RestartKeepsNodes (76.15s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-459756
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-459756
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-459756: (31.271882043s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-459756 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-459756 --wait=true -v=5 --alsologtostderr: (44.774772987s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-459756
--- PASS: TestMultiNode/serial/RestartKeepsNodes (76.15s)

TestMultiNode/serial/DeleteNode (5.2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-459756 node delete m03: (4.61823684s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

TestMultiNode/serial/StopMultiNode (28.43s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-459756 stop: (28.255727221s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-459756 status: exit status 7 (86.011156ms)

-- stdout --
	multinode-459756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-459756-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr: exit status 7 (86.605953ms)

-- stdout --
	multinode-459756
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-459756-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1018 09:35:05.409522  283556 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:35:05.409629  283556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:05.409635  283556 out.go:374] Setting ErrFile to fd 2...
	I1018 09:35:05.409639  283556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:35:05.409844  283556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:35:05.410042  283556 out.go:368] Setting JSON to false
	I1018 09:35:05.410071  283556 mustload.go:65] Loading cluster: multinode-459756
	I1018 09:35:05.410213  283556 notify.go:220] Checking for updates...
	I1018 09:35:05.410456  283556 config.go:182] Loaded profile config "multinode-459756": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:35:05.410479  283556 status.go:174] checking status of multinode-459756 ...
	I1018 09:35:05.411082  283556 cli_runner.go:164] Run: docker container inspect multinode-459756 --format={{.State.Status}}
	I1018 09:35:05.429789  283556 status.go:371] multinode-459756 host status = "Stopped" (err=<nil>)
	I1018 09:35:05.429855  283556 status.go:384] host is not running, skipping remaining checks
	I1018 09:35:05.429872  283556 status.go:176] multinode-459756 status: &{Name:multinode-459756 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1018 09:35:05.429915  283556 status.go:174] checking status of multinode-459756-m02 ...
	I1018 09:35:05.430171  283556 cli_runner.go:164] Run: docker container inspect multinode-459756-m02 --format={{.State.Status}}
	I1018 09:35:05.448681  283556 status.go:371] multinode-459756-m02 host status = "Stopped" (err=<nil>)
	I1018 09:35:05.448717  283556 status.go:384] host is not running, skipping remaining checks
	I1018 09:35:05.448725  283556 status.go:176] multinode-459756-m02 status: &{Name:multinode-459756-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.43s)

TestMultiNode/serial/RestartMultiNode (50.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-459756 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-459756 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.140720138s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-459756 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.73s)

TestMultiNode/serial/ValidateNameConflict (24.86s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-459756
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-459756-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-459756-m02 --driver=docker  --container-runtime=crio: exit status 14 (62.276086ms)

-- stdout --
	* [multinode-459756-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-459756-m02' is duplicated with machine name 'multinode-459756-m02' in profile 'multinode-459756'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-459756-m03 --driver=docker  --container-runtime=crio
E1018 09:36:10.475590  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-459756-m03 --driver=docker  --container-runtime=crio: (22.136201453s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-459756
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-459756: exit status 80 (280.648722ms)

-- stdout --
	* Adding node m03 to cluster multinode-459756 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-459756-m03 already exists in multinode-459756-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-459756-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-459756-m03: (2.329258491s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.86s)
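
The MK_USAGE failure above reflects a profile-name uniqueness rule: a new profile may not reuse a machine name that already belongs to an existing multi-node profile. A minimal sketch of such a check; the machineNames map is an illustrative stand-in, not minikube's actual profile store:

package main

import "fmt"

func main() {
	// Hypothetical view of existing profiles and their machine names.
	machineNames := map[string][]string{
		"multinode-459756": {"multinode-459756", "multinode-459756-m02"},
	}
	candidate := "multinode-459756-m02"
	for profile, machines := range machineNames {
		for _, m := range machines {
			if m == candidate {
				fmt.Printf("! Profile name %q is duplicated with machine name %q in profile %q\n",
					candidate, m, profile)
				return
			}
		}
	}
	fmt.Println("profile name is unique")
}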

TestPreload (93.58s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-541536 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-541536 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.759941031s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-541536 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-541536 image pull gcr.io/k8s-minikube/busybox: (2.08705942s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-541536
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-541536: (5.912958202s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-541536 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1018 09:37:26.133776  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-541536 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (35.206279527s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-541536 image list
helpers_test.go:175: Cleaning up "test-preload-541536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-541536
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-541536: (2.408425679s)
--- PASS: TestPreload (93.58s)

TestScheduledStopUnix (97.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-624212 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-624212 --memory=3072 --driver=docker  --container-runtime=crio: (20.599549922s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-624212 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-624212 -n scheduled-stop-624212
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-624212 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1018 09:38:19.779396  134611 retry.go:31] will retry after 123.021µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.780574  134611 retry.go:31] will retry after 149.514µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.781719  134611 retry.go:31] will retry after 306.433µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.782874  134611 retry.go:31] will retry after 366.001µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.784006  134611 retry.go:31] will retry after 271.045µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.785158  134611 retry.go:31] will retry after 1.017822ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.786292  134611 retry.go:31] will retry after 601.436µs: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.787419  134611 retry.go:31] will retry after 1.362108ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.789650  134611 retry.go:31] will retry after 3.818046ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.793917  134611 retry.go:31] will retry after 4.843555ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.799117  134611 retry.go:31] will retry after 5.548736ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.805352  134611 retry.go:31] will retry after 12.878949ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.818607  134611 retry.go:31] will retry after 8.448893ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.827875  134611 retry.go:31] will retry after 23.337534ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
I1018 09:38:19.852117  134611 retry.go:31] will retry after 18.170627ms: open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/scheduled-stop-624212/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-624212 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-624212 -n scheduled-stop-624212
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-624212
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-624212 --schedule 15s
E1018 09:38:49.201193  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1018 09:39:13.543230  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-624212
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-624212: exit status 7 (67.399315ms)

-- stdout --
	scheduled-stop-624212
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-624212 -n scheduled-stop-624212
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-624212 -n scheduled-stop-624212: exit status 7 (65.293719ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-624212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-624212
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-624212: (5.406920606s)
--- PASS: TestScheduledStopUnix (97.33s)
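
The retry.go lines above show the pattern used while the scheduled-stop pid file briefly does not exist: re-attempt the read with short, growing, jittered delays. A sketch of that pattern under those assumptions; retryOpen is a hypothetical helper, not minikube's retry.go:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryOpen re-reads path with a roughly doubling, jittered backoff,
// mirroring the microsecond-to-millisecond growth visible in the log.
func retryOpen(path string, attempts int) ([]byte, error) {
	backoff := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		b, err := os.ReadFile(path)
		if err == nil {
			return b, nil
		}
		lastErr = err
		d := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		backoff *= 2
	}
	return nil, lastErr
}

func main() {
	if _, err := retryOpen("/nonexistent/pid", 5); err != nil {
		fmt.Println("gave up:", err)
	}
}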

TestInsufficientStorage (10.11s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-769366 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-769366 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.660008441s)

-- stdout --
	{"specversion":"1.0","id":"53d8520b-bb09-4a7d-97f4-88ff9c4b88d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-769366] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"66dcec58-613c-437d-82a7-7e09f887671b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21764"}}
	{"specversion":"1.0","id":"38f6c270-4e65-41bc-b7da-cd43c144424a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"011f4ffc-3436-4352-9166-f776acf47a50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig"}}
	{"specversion":"1.0","id":"7d655aac-224f-4a93-97a5-9e68b36ce810","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube"}}
	{"specversion":"1.0","id":"e3e5356f-89ee-4e60-978f-22427a42959d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"27477c89-4981-423b-919d-c28b6aa84711","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d3712e0-b655-4f5e-82a9-2330f60754f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"faa1961c-98ba-41a1-ae2e-63b548666bd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f73e3606-1a7c-43e2-94ae-973776e30d22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba46eb45-456a-44ea-b8db-1423c6ef0597","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8817e02b-ba39-4924-8093-01a03ae665be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-769366\" primary control-plane node in \"insufficient-storage-769366\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"52cfb6f3-1610-4c68-9365-a0e8df265807","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cae1ae73-6f8d-4788-9ce0-e8dc077ae183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7101aef-36b1-4ddf-9931-caf60ebab46e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-769366 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-769366 --output=json --layout=cluster: exit status 7 (278.808258ms)

-- stdout --
	{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 09:39:44.037164  303863 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-769366" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-769366 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-769366 --output=json --layout=cluster: exit status 7 (268.558395ms)

-- stdout --
	{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1018 09:39:44.306647  303974 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-769366" does not appear in /home/jenkins/minikube-integration/21764-131066/kubeconfig
	E1018 09:39:44.316576  303974 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/insufficient-storage-769366/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-769366" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-769366
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-769366: (1.904968235s)
--- PASS: TestInsufficientStorage (10.11s)
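
The cluster-layout status JSON above encodes health as HTTP-like codes (507 InsufficientStorage, 500 Error, 405 Stopped). A sketch of decoding that payload in Go; the struct names are illustrative, while the field names mirror the JSON keys seen in the log:

package main

import (
	"encoding/json"
	"fmt"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterNode struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
}

type clusterStatus struct {
	Name          string
	StatusCode    int
	StatusName    string
	StatusDetail  string
	BinaryVersion string
	Components    map[string]component
	Nodes         []clusterNode
}

func main() {
	// Trimmed copy of the `status --output=json --layout=cluster` output above.
	raw := `{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-769366","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"}}}]}`
	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	fmt.Println(st.StatusName, st.Nodes[0].Components["apiserver"].StatusName)
	// Output: InsufficientStorage Stopped
}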

TestRunningBinaryUpgrade (55.06s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2749628820 start -p running-upgrade-896586 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2749628820 start -p running-upgrade-896586 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.850360039s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-896586 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-896586 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.577588167s)
helpers_test.go:175: Cleaning up "running-upgrade-896586" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-896586
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-896586: (3.974226323s)
--- PASS: TestRunningBinaryUpgrade (55.06s)

TestKubernetesUpgrade (297.71s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.226738881s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-919613
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-919613: (1.942657922s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-919613 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-919613 status --format={{.Host}}: exit status 7 (91.805701ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m24.110612985s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-919613 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (71.806521ms)

-- stdout --
	* [kubernetes-upgrade-919613] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-919613
	    minikube start -p kubernetes-upgrade-919613 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9196132 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-919613 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-919613 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.610966722s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-919613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-919613
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-919613: (2.587049325s)
--- PASS: TestKubernetesUpgrade (297.71s)
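
Exit status 106 above is minikube refusing to move an existing cluster to an older Kubernetes version. A sketch of that kind of version gate; parseVer and less are illustrative, not minikube's implementation, and they ignore pre-release tags:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseVer turns "v1.34.1" into [1 34 1].
func parseVer(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		out[i], _ = strconv.Atoi(p)
	}
	return out
}

// less reports whether a is an older version than b.
func less(a, b [3]int) bool {
	for i := 0; i < 3; i++ {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

func main() {
	existing, requested := "v1.34.1", "v1.28.0"
	if less(parseVer(requested), parseVer(existing)) {
		fmt.Printf("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s cluster to %s\n",
			existing, requested)
	}
}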

TestMissingContainerUpgrade (77.97s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2650660069 start -p missing-upgrade-631894 --memory=3072 --driver=docker  --container-runtime=crio
E1018 09:41:10.472627  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2650660069 start -p missing-upgrade-631894 --memory=3072 --driver=docker  --container-runtime=crio: (23.322951258s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-631894
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-631894: (10.489039786s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-631894
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-631894 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.638616012s)
helpers_test.go:175: Cleaning up "missing-upgrade-631894" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-631894
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-631894: (5.481051548s)
--- PASS: TestMissingContainerUpgrade (77.97s)

TestStoppedBinaryUpgrade/Setup (3.04s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.04s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (80.24744ms)

-- stdout --
	* [NoKubernetes-667751] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
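
The MK_USAGE error above is a mutually-exclusive-flags check: --no-kubernetes and --kubernetes-version cannot be combined. A sketch of that validation using the standard flag package (minikube itself builds its CLI with cobra, so this is a simplification):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr,
			"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // matches the exit status seen in the log
	}
}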

TestNoKubernetes/serial/StartWithK8s (37.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667751 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667751 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.795911144s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-667751 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.16s)

TestStoppedBinaryUpgrade/Upgrade (74.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3096526143 start -p stopped-upgrade-698869 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3096526143 start -p stopped-upgrade-698869 --memory=3072 --vm-driver=docker  --container-runtime=crio: (44.861137614s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3096526143 -p stopped-upgrade-698869 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3096526143 -p stopped-upgrade-698869 stop: (14.118549756s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-698869 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-698869 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.145789362s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (74.13s)

TestNoKubernetes/serial/StartWithStopK8s (17.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (14.853343276s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-667751 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-667751 status -o json: exit status 2 (301.926983ms)

-- stdout --
	{"Name":"NoKubernetes-667751","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-667751
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-667751: (1.99156999s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.15s)

TestNoKubernetes/serial/Start (7.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667751 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.883725673s)
--- PASS: TestNoKubernetes/serial/Start (7.88s)

TestPause/serial/Start (39.97s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-238319 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-238319 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (39.970051428s)
--- PASS: TestPause/serial/Start (39.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-667751 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-667751 "sudo systemctl is-active --quiet service kubelet": exit status 1 (335.861933ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
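
The check above passes because systemctl is-active --quiet exits 0 when the unit is active and non-zero when it is not (3 is the conventional "not running" code, which ssh surfaces as "Process exited with status 3"). A sketch of reading that exit code from Go, assuming a systemd host:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; only the exit code carries the answer.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("kubelet not active, exit code:", exitErr.ExitCode())
	} else if err == nil {
		fmt.Println("kubelet active")
	} else {
		fmt.Println("could not run systemctl:", err)
	}
}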

TestNoKubernetes/serial/ProfileList (1.86s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.86s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-667751
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-667751: (1.283601379s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (7.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667751 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667751 --driver=docker  --container-runtime=crio: (7.27681843s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.28s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-667751 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-667751 "sudo systemctl is-active --quiet service kubelet": exit status 1 (295.145553ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-698869
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-698869: (1.130947764s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.13s)

TestNetworkPlugins/group/false (3.36s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-345705 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-345705 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (178.92309ms)

-- stdout --
	* [false-345705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21764
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1018 09:41:04.793588  329725 out.go:360] Setting OutFile to fd 1 ...
	I1018 09:41:04.793886  329725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:04.793894  329725 out.go:374] Setting ErrFile to fd 2...
	I1018 09:41:04.793906  329725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1018 09:41:04.794238  329725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21764-131066/.minikube/bin
	I1018 09:41:04.795231  329725 out.go:368] Setting JSON to false
	I1018 09:41:04.796461  329725 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":5009,"bootTime":1760775456,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1018 09:41:04.796572  329725 start.go:141] virtualization: kvm guest
	I1018 09:41:04.797951  329725 out.go:179] * [false-345705] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1018 09:41:04.799526  329725 notify.go:220] Checking for updates...
	I1018 09:41:04.799561  329725 out.go:179]   - MINIKUBE_LOCATION=21764
	I1018 09:41:04.800849  329725 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1018 09:41:04.802002  329725 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21764-131066/kubeconfig
	I1018 09:41:04.803095  329725 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21764-131066/.minikube
	I1018 09:41:04.804100  329725 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1018 09:41:04.805241  329725 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1018 09:41:04.806977  329725 config.go:182] Loaded profile config "pause-238319": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1018 09:41:04.807135  329725 config.go:182] Loaded profile config "running-upgrade-896586": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:41:04.807273  329725 config.go:182] Loaded profile config "stopped-upgrade-698869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1018 09:41:04.807401  329725 driver.go:421] Setting default libvirt URI to qemu:///system
	I1018 09:41:04.832470  329725 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1018 09:41:04.832546  329725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1018 09:41:04.911734  329725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-10-18 09:41:04.889958295 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1041-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1018 09:41:04.911911  329725 docker.go:318] overlay module found
	I1018 09:41:04.913654  329725 out.go:179] * Using the docker driver based on user configuration
	I1018 09:41:04.914738  329725 start.go:305] selected driver: docker
	I1018 09:41:04.914759  329725 start.go:925] validating driver "docker" against <nil>
	I1018 09:41:04.914787  329725 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1018 09:41:04.916778  329725 out.go:203] 
	W1018 09:41:04.918475  329725 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1018 09:41:04.919927  329725 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-345705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-345705

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-345705

>>> host: /etc/nsswitch.conf:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

>>> host: /etc/hosts:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

>>> host: /etc/resolv.conf:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-345705

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-345705" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt
extensions:
- extension:
last-update: Sat, 18 Oct 2025 09:40:56 UTC
provider: minikube.sigs.k8s.io
version: v1.37.0
name: cluster_info
server: https://192.168.85.2:8443
name: running-upgrade-896586
contexts:
- context:
cluster: running-upgrade-896586
user: running-upgrade-896586
name: running-upgrade-896586
current-context: ""
kind: Config
users:
- name: running-upgrade-896586
user:
client-certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.crt
client-key: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-345705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-345705"

                                                
                                                
----------------------- debugLogs end: false-345705 [took: 3.002826044s] --------------------------------
helpers_test.go:175: Cleaning up "false-345705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-345705
--- PASS: TestNetworkPlugins/group/false (3.36s)
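
A note on the failures above: every probe in this debugLogs dump reports "context was not found" or "Profile ... not found" because no false-345705 profile existed when the collector ran, and the kubectl config dump confirms it, showing an empty current-context and only the running-upgrade-896586 context. A minimal sketch of checking this state by hand (commands assumed to be run on the build host; not part of the recorded run):

  kubectl config get-contexts                          # false-345705 is absent
  minikube profile list                                # remaining minikube profiles
  kubectl --context running-upgrade-896586 get nodes   # the one context that does exist, per the dump above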

TestPause/serial/SecondStartNoReconfiguration (8.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-238319 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.112803752s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.13s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.492074273s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.49s)

TestStartStop/group/no-preload/serial/FirstStart (51.76s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 09:42:26.129544  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.757066854s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.76s)
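
(The E1018 09:42:26 cert_rotation line above comes from a background client-certificate reload watcher: it references a client.crt under the functional-622052 profile that no longer exists on disk, and it has no bearing on this test's result.)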

TestStartStop/group/old-k8s-version/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-619885 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2e50d21c-d2e2-4cc7-b111-04c19153fc41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2e50d21c-d2e2-4cc7-b111-04c19153fc41] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003369317s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-619885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.23s)
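
Each DeployApp step in this report follows the same pattern: create testdata/busybox.yaml in the profile's kubectl context, poll until the pod labelled integration-test=busybox is Running, then exec a trivial command inside it. A rough hand-run equivalent, sketched with kubectl's built-in polling instead of the harness's own poll loop (the timeout mirrors the 8m0s budget above):

  kubectl --context old-k8s-version-619885 create -f testdata/busybox.yaml
  kubectl --context old-k8s-version-619885 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m0s
  kubectl --context old-k8s-version-619885 exec busybox -- /bin/sh -c "ulimit -n"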

TestStartStop/group/no-preload/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-589869 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [51be3b0e-97f1-4abd-863d-5069b9e73230] Pending
helpers_test.go:352: "busybox" [51be3b0e-97f1-4abd-863d-5069b9e73230] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [51be3b0e-97f1-4abd-863d-5069b9e73230] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003589605s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-589869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-619885 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-619885 --alsologtostderr -v=3: (16.033230422s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.03s)

TestStartStop/group/no-preload/serial/Stop (18.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-589869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-589869 --alsologtostderr -v=3: (18.059210152s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.06s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885: exit status 7 (71.996467ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-619885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
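
EnableAddonAfterStop leans on the fact that "minikube status" exits nonzero once the host is stopped: the exit status 7 with Host=Stopped is the expected state here (hence "may be ok"), after which the dashboard addon is enabled against the stopped profile. A sketch of the same check in shell, assuming only that a nonzero status exit signals the stopped host, as the test does:

  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 || echo "host stopped (exit $?)"
  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-619885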

TestStartStop/group/old-k8s-version/serial/SecondStart (29.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-619885 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (29.029397797s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-619885 -n old-k8s-version-619885
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (29.35s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869: exit status 7 (71.377177ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-589869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (43.58s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-589869 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.230140188s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-589869 -n no-preload-589869
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (43.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-88pgw" [7390a37b-b66c-4dbe-85de-5ba96c9a7f24] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003987275s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-88pgw" [7390a37b-b66c-4dbe-85de-5ba96c9a7f24] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003315052s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-619885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-619885 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)
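
VerifyKubernetesImages lists the images cached in the profile and flags anything outside the expected minikube set; the kindnet and busybox images reported above were pulled by earlier steps, so the test still passes. A quick manual equivalent, assuming jq is available on the host and that the repoTags field name matches minikube's JSON image listing:

  out/minikube-linux-amd64 -p old-k8s-version-619885 image list --format=json | jq -r '.[].repoTags[]'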

TestStartStop/group/embed-certs/serial/FirstStart (39.83s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.830604124s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (39.83s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cckhv" [8f48c99e-2020-467e-951d-38d637d68c79] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002875157s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cckhv" [8f48c99e-2020-467e-951d-38d637d68c79] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002940199s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-589869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-589869 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.181052266s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.18s)

TestStartStop/group/newest-cni/serial/FirstStart (30.24s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.23890565s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.24s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-055175 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c] Pending
helpers_test.go:352: "busybox" [cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cbc79bc0-bf43-48ca-a6bc-937aa2d7fc9c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.005192682s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-055175 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

TestStartStop/group/embed-certs/serial/Stop (18.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-055175 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-055175 --alsologtostderr -v=3: (18.175021689s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.18s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (12.42s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-708733 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-708733 --alsologtostderr -v=3: (12.419095089s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.42s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175: exit status 7 (67.538194ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-055175 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (51.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-055175 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.85793093s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-055175 -n embed-certs-055175
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3d931b08-4593-4046-8efd-e406a9611796] Pending
helpers_test.go:352: "busybox" [3d931b08-4593-4046-8efd-e406a9611796] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3d931b08-4593-4046-8efd-e406a9611796] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003246133s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.25s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733: exit status 7 (65.401991ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-708733 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (11.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-708733 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.43022488s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-708733 -n newest-cni-708733
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.80s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.62s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-942905 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-942905 --alsologtostderr -v=3: (16.621488092s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.62s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-708733 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/Start (39.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.813314799s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.82s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905: exit status 7 (71.055449ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-942905 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1018 09:46:10.472990  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/addons-222746/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-942905 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.400208382s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-942905 -n default-k8s-diff-port-942905
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (48.77s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ddr7" [e31c631f-8252-4b1d-bfff-16eb5c82009c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003641234s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5ddr7" [e31c631f-8252-4b1d-bfff-16eb5c82009c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003889557s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-055175 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-055175 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-345705 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (8.3s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2glvc" [118415bf-7cf4-4ab1-861a-b25f3489acb3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2glvc" [118415bf-7cf4-4ab1-861a-b25f3489acb3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004757373s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.30s)

TestNetworkPlugins/group/kindnet/Start (41.47s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.467211706s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.47s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
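
(HairPin verifies that a pod can reach itself through its own Service name: "nc -z netcat 8080" runs from inside the netcat deployment and only succeeds when the hairpin path through kube-proxy/CNI is wired up. The Localhost test above is the control, probing the same port on 127.0.0.1.)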

TestNetworkPlugins/group/calico/Start (55.31s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.313694362s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.31s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4zp6s" [92964e9c-974b-45c0-99fd-c175df299295] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006129206s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4zp6s" [92964e9c-974b-45c0-99fd-c175df299295] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003660199s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-942905 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-942905 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

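The image audit above can be replayed by hand. A sketch, assuming the profile still exists; piping through jq and the repoTags field name are assumptions about local tooling and the JSON shape, not something this log confirms:

    # List cluster images as JSON and flatten the tag lists; jq is assumed to be installed
    minikube -p default-k8s-diff-port-942905 image list --format=json | jq -r '.[].repoTags[]'
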
TestNetworkPlugins/group/custom-flannel/Start (53.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.376891951s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.38s)

TestNetworkPlugins/group/enable-default-cni/Start (64.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m4.304652469s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (64.30s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-2v4hj" [8838d32b-829b-4338-87ed-40bbe4833023] Running
E1018 09:47:26.129423  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/functional-622052/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003683675s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

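The ControllerPod step polls pods matching a label until they report healthy. Roughly the same check expressed with plain kubectl (a sketch; kubectl wait is a standard command, and 600s stands in for the test's 10m0s budget):

    kubectl --context kindnet-345705 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=600s
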
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-345705 "pgrep -a kubelet"
I1018 09:47:31.992542  134611 config.go:182] Loaded profile config "kindnet-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-669n6" [352c9b83-be8d-418f-9be3-5620a6e26373] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-669n6" [352c9b83-be8d-418f-9be3-5620a6e26373] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004159716s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

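NetCatPod is a replace-then-wait: force-replace the netcat deployment, then poll the app=netcat pods. A hand-run equivalent, assuming testdata/netcat-deployment.yaml from the minikube repo is available locally (900s mirrors the test's 15m0s wait):

    kubectl --context kindnet-345705 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context kindnet-345705 wait --for=condition=Ready pod -l app=netcat --timeout=900s
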
TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pf2v4" [992c22f8-5836-47c0-9ac1-0021131d079e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004060026s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-345705 "pgrep -a kubelet"
I1018 09:47:47.649232  134611 config.go:182] Loaded profile config "calico-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2mbjk" [c626101e-f29c-44d3-9dbc-59229e6b184a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2mbjk" [c626101e-f29c-44d3-9dbc-59229e6b184a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005023474s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-345705 "pgrep -a kubelet"
I1018 09:47:59.183319  134611 config.go:182] Loaded profile config "custom-flannel-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-4db2v" [ee71f3b8-78bf-427d-8ae5-792e5d3280bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-4db2v" [ee71f3b8-78bf-427d-8ae5-792e5d3280bd] Running
E1018 09:48:06.125294  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.131923  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.143283  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.164719  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.206123  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.287628  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.449376  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:06.770723  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:07.412567  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003759674s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/Start (54.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (54.367993808s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.37s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (67.53s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1018 09:48:17.738542  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-345705 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.526549546s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.53s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-345705 "pgrep -a kubelet"
I1018 09:48:18.117661  134611 config.go:182] Loaded profile config "enable-default-cni-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tmvgq" [7ece5bad-6bc4-4414-a721-95c9815a1f67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1018 09:48:19.020187  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tmvgq" [7ece5bad-6bc4-4414-a721-95c9815a1f67] Running
E1018 09:48:21.581802  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:26.620136  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/old-k8s-version-619885/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1018 09:48:26.703464  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003531871s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.23s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-wj2fl" [9879bd84-0ed1-4930-a84c-a2b20da2676e] Running
E1018 09:48:57.427523  134611 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/no-preload-589869/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003265616s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-345705 "pgrep -a kubelet"
I1018 09:49:01.424116  134611 config.go:182] Loaded profile config "flannel-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xnfvh" [6835bfda-82a2-4cf2-addb-e3d0a53f36d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xnfvh" [6835bfda-82a2-4cf2-addb-e3d0a53f36d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004174752s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-345705 "pgrep -a kubelet"
I1018 09:49:25.282623  134611 config.go:182] Loaded profile config "bridge-345705": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-345705 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ddgwf" [bd860b7f-7fd2-4d8e-b153-668cb3012862] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ddgwf" [bd860b7f-7fd2-4d8e-b153-668cb3012862] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003608757s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-345705 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-345705 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

Test skip (26/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-399936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-399936
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

TestNetworkPlugins/group/kubenet (3.44s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-345705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-345705

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-345705

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: /etc/hosts:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: /etc/resolv.conf:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-345705

>>> host: crictl pods:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: crictl containers:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> k8s: describe netcat deployment:
error: context "kubenet-345705" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-345705" does not exist

>>> k8s: netcat logs:
error: context "kubenet-345705" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-345705" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-345705" does not exist

>>> k8s: coredns logs:
error: context "kubenet-345705" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-345705" does not exist

>>> k8s: api server logs:
error: context "kubenet-345705" does not exist

>>> host: /etc/cni:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: ip a s:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: ip r s:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: iptables-save:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: iptables table nat:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-345705" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-345705" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-345705" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: kubelet daemon config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> k8s: kubelet logs:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:40:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-896586
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-698869
contexts:
- context:
    cluster: running-upgrade-896586
    user: running-upgrade-896586
  name: running-upgrade-896586
- context:
    cluster: stopped-upgrade-698869
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:41:02 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: stopped-upgrade-698869
  name: stopped-upgrade-698869
current-context: stopped-upgrade-698869
kind: Config
users:
- name: running-upgrade-896586
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.crt
    client-key: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.key
- name: stopped-upgrade-698869
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/stopped-upgrade-698869/client.crt
    client-key: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/stopped-upgrade-698869/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-345705

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-345705"
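
Every ">>> host:" probe above collapses to the same profile-not-found hint because debugLogs gathers host state by shelling into the node, which needs a running profile. A hedged sketch of the equivalent manual probes; "demo" is an illustrative profile name, not one from this run:

minikube -p demo ssh -- sudo systemctl status kubelet --no-pager    # kubelet daemon status
minikube -p demo ssh -- sudo cat /var/lib/kubelet/config.yaml       # kubelet config file
minikube -p demo ssh -- sudo systemctl status crio --no-pager       # crio daemon status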
----------------------- debugLogs end: kubenet-345705 [took: 3.281341189s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-345705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-345705
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)

TestNetworkPlugins/group/cilium (4.07s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-345705 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-345705

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-345705
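
On a live cluster these checks run inside the test's netcat deployment against the in-cluster DNS service at 10.96.0.10. A minimal sketch of what they would look like once the cilium-345705 context exists (assumes the netcat deployment is up):

kubectl --context cilium-345705 exec deploy/netcat -- nslookup kubernetes.default
kubectl --context cilium-345705 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
kubectl --context cilium-345705 exec deploy/netcat -- dig +tcp @10.96.0.10 kubernetes.default.svc.cluster.local   # tcp/53 variant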

>>> host: /etc/nsswitch.conf:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/hosts:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/resolv.conf:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-345705

>>> host: crictl pods:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: crictl containers:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> k8s: describe netcat deployment:
error: context "cilium-345705" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-345705" does not exist

>>> k8s: netcat logs:
error: context "cilium-345705" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-345705" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-345705" does not exist

>>> k8s: coredns logs:
error: context "cilium-345705" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-345705" does not exist

>>> k8s: api server logs:
error: context "cilium-345705" does not exist
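
For reference, the k8s probes in this block map onto plain kubectl invocations; a sketch assuming the cilium-345705 context existed:

kubectl --context cilium-345705 get nodes,services,endpoints,daemonsets,deployments,pods -A
kubectl --context cilium-345705 describe deployment netcat
kubectl --context cilium-345705 -n kube-system logs deployment/coredns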

>>> host: /etc/cni:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: ip a s:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: ip r s:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: iptables-save:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: iptables table nat:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"
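
The host-networking probes above are one-liners over minikube ssh; a hedged manual equivalent, meaningful only once the profile is running:

minikube -p cilium-345705 ssh -- ip a s
minikube -p cilium-345705 ssh -- ip r s
minikube -p cilium-345705 ssh -- sudo iptables-save
minikube -p cilium-345705 ssh -- sudo iptables -t nat -S    # nat table only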

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-345705

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-345705

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-345705" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-345705" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-345705

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-345705

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-345705" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-345705" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-345705" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-345705" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-345705" does not exist
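
These sections would normally capture both current and previous container logs for the CNI and kube-proxy; a sketch of the underlying commands, assuming a started cluster with the cilium daemon set deployed:

kubectl --context cilium-345705 -n kube-system describe ds/cilium
kubectl --context cilium-345705 -n kube-system logs ds/cilium
kubectl --context cilium-345705 -n kube-system logs ds/cilium --previous   # logs from the prior container instance, if it restarted
kubectl --context cilium-345705 -n kube-system logs ds/kube-proxy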

>>> host: kubelet daemon status:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: kubelet daemon config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> k8s: kubelet logs:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:41:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-238319
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21764-131066/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:40:56 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-896586
contexts:
- context:
    cluster: pause-238319
    extensions:
    - extension:
        last-update: Sat, 18 Oct 2025 09:41:10 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-238319
  name: pause-238319
- context:
    cluster: running-upgrade-896586
    user: running-upgrade-896586
  name: running-upgrade-896586
current-context: pause-238319
kind: Config
users:
- name: pause-238319
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.crt
    client-key: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/pause-238319/client.key
- name: running-upgrade-896586
  user:
    client-certificate: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.crt
    client-key: /home/jenkins/minikube-integration/21764-131066/.minikube/profiles/running-upgrade-896586/client.key
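
Note that this second snapshot was taken later in the run: current-context has moved to pause-238319 and the stopped-upgrade-698869 entry is already gone. A quick, illustrative way to inspect just the active entry on the same workspace:

kubectl config current-context
kubectl config view --minify --flatten   # prints only the current context's cluster/user stanzas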

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-345705

>>> host: docker daemon status:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: docker daemon config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: docker system info:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: cri-docker daemon status:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: cri-docker daemon config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: cri-dockerd version:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: containerd daemon status:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: containerd daemon config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: containerd config dump:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: crio daemon status:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: crio daemon config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: /etc/crio:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"

>>> host: crio config:
* Profile "cilium-345705" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-345705"
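
The crio probes at the end mirror the containerd ones; a hedged manual equivalent against a running crio-backed profile ("demo" is illustrative):

minikube -p demo ssh -- sudo systemctl is-active crio
minikube -p demo ssh -- sudo ls /etc/crio            # config directory the report tries to dump
minikube -p demo ssh -- sudo crictl ps -a            # containers as crio sees them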
----------------------- debugLogs end: cilium-345705 [took: 3.910880125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-345705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-345705
--- SKIP: TestNetworkPlugins/group/cilium (4.07s)